link: https://f1000research.com/articles/7-501/v1
date: 26 Apr 18
{
"type": "Opinion Article",
"title": "The case for openness in engineering research",
"authors": [
"Devin R. Berg",
"Kyle E. Niemeyer"
],
"abstract": "In this article, we review the literature on the benefits, and possible downsides, of openness in engineering research. We attempt to examine the issue from multiple perspectives, including reasons and motivations for introducing open practices into an engineering researcher's workflow and the challenges faced by scholars looking to do so. Further, we present our thoughts and reflections on the role that open engineering research can play in defining the purpose and activities of the university. We have made some specific recommendations on how the public university can recommit to and push the boundaries of its role as the creator and promoter of public knowledge. In doing so, the university will further demonstrate its vital role in the continued economic, social, and technological development of society. We have also included some thoughts on how this applies specifically to the field of engineering and how a culture of openness and sharing within the engineering community can help drive societal development.",
"keywords": [
"open science",
"open engineering",
"engineering",
"open access",
"research dissemination"
],
"content": "Introduction\n\nWorking openly should be the default mode of science—after all, how can we advance knowledge “by standing on the shoulders of giants”a if we cannot access or see those shoulders? This paper operates on the following definition of open science:\n\nOpen science, or more broadly open research, describes the activity of performing scientific research in a manner that makes products and findings accessible to anyone. This includes sharing data openly (open data), publicly releasing the source code for research software (open-source software), and making the written products of research openly accessible (open access).\n\nThe field of engineering provides an interesting case study for examining the impacts of open practices since engineering touches every aspect of human life. Engineering research is inherent to the development of goods and products such as medical devices and pharmaceuticals, so issues around the protection of intellectual property and innovation draw stark contrasts, for some, with the tenets of open science. On the other hand, work being done in open-source software and open-source hardware can enable us to engineer the tools of modern scientific discovery, greatly reducing the costs of scientific research1.\n\nIn a time of constrained university budgets, which are not expected to improve as long as most public universities rely heavily on state funding, many universities are being forced to evaluate their institutional priorities2. For some, particularly state universities subject to the whims of state legislation, this could mean abandoning the pursuit of fundamental or basic knowledge generation in favor of more marketable vocational training models that cater more directly to industry needs. 
While this model is in line with the Morrill Act of 1862, which underpins the missions of many US institutions, the university has evolved since that time to encompass a much greater proportion of the economic development of the country2. Despite the challenges faced by institutions today, it is critical for the university to continue to position itself as a center of societal development—economically, technologically, and socially. Additionally, the university should push this model further towards positioning itself as the main driver of social and technological innovation. To achieve this, it is necessary to position and market the business of the university, as clearly as possible, as a service provider to many relevant stakeholders. This can be best accomplished by disseminating and distributing the products of university activities as widely as possible through open access publishing, open research, and open innovation, and further demonstrating the impact that these products have on local, state, national, and international populations. As stated by Ashley Farley of the Gates Foundation, “Open research should be the norm. Knowledge should be a public good”3.\n\n\nImportance of open science\n\nConducting open research is an act of assigning value to the work to which you are passionately committed. This includes all of the final, polished products of that work—including papers, software, and data—as well as the half-baked ideas, the napkin sketches, the first drafts, and the failures. The dissemination of these artifacts may on occasion comprise an act of humility, but ultimately it recognizes that each of these items is a piece of the research process and that even your failures have value in the lessons you learned and can be passed on to others. Transforming research communities from traditional, closed environments to open ones is important for a number of reasons, including (but not necessarily limited to) the following six. 
McKiernan et al.4 discuss these and additional benefits for researchers working openly. Tennant et al.5 review in detail the benefits of open-access publications to academics and society.\n\nAccessibility: Openness in research ensures that research products, particularly written output, remain accessible to all. This includes the research community, funders, policy makers, and the general public. Accessibility of research products is particularly important for publicly funded research—since the public paid for the research, the public should have access to, and be able to benefit from, it. (This does not prevent innovators or other parties from developing commercial intellectual property based on the findings, but ensures that the original discovery, when funded by the public, remains accessible to all.) While the body of published work available freely through open access or online social networks has approached 50%, this percentage is notably lower in engineering at approximately 35%6. However, Piwowar et al.7 found that this percentage drops below 20% when not considering articles self-archived on author websites, which can lack assurance of long-term availability.\n\nReproducibility: Releasing products of research, including software and data, helps enable reproducibility. This is particularly true for computational science, where a written description of methods can never describe an approach as completely as the source code8. In general, access to research software used to perform a computational study, or the data from an experimental study, should enable others to reproduce the findings of the original researchers. However, open science is a necessary but not sufficient aspect of reproducibility, as it can be challenging to reproduce or replicate results even with available research software and data9,10.\n\nImpact: As a selfish motivation, performing research openly helps increase the impact of the work. 
Studies have shown that open-access papers are cited more in most research fields. In engineering, open-access papers are cited around 1.5 times more often than non-open-access papers6,11. Similarly, papers with associated open data were cited 9–50% more than those without4,12. Vandewalle13 showed that papers in the image-processing field receive up to three times the number of citations when source code is made available. We must note, however, that the concept of impact should not be solely regarded through the measure of citations. The true societal impact of the work is likely more important but also more difficult to quantify14.\n\nEstablish priority: Some researchers hesitate to embrace open science out of a fear of being “scooped,” where competitors will use some findings, software tools, or data made available and then publish first. However, contrary to this belief, practicing open science can actually prevent being scooped: releasing preprints can establish priority of discoveries or techniques prior to the publication of a traditional peer-reviewed journal article15,16.\n\nThe peer-review and editorial process of such papers can take many months or years, but journal articles are still necessary for research findings to be considered valid (and for researchers to receive credit). Publishing a preprint of an article publicly time-stamps the work, even as it undergoes peer review and possible revision.\n\nEncourages trust: Embracing openness in scientific research can help encourage other researchers to trust published results, by giving the ability to inspect data or software. Soergel17 estimated that 5–100% of computational results given by software may be incorrect or inaccurate. 
While simply releasing source code openly will not solve this problem, this is a necessary step towards verification and reproducibility.\n\nIt’s nice: In addition to the above benefits, sharing products of research openly is kind to colleagues and the greater research community, as it prevents people from wasting time by unnecessarily repeating work. For example, many graduate students begin working on their dissertation research by attempting to reimplement another group’s methods and reproduce some of their published results. However, lacking access to software source code or datasets can hinder this work. As a result, significant time can be wasted guessing about minor implementation details or inputs not discussed in the corresponding published papers. This can be avoided by sharing the source code and data, which would allow these junior researchers to more quickly move on to new work. Graduate students and other researchers constantly face similar challenges that could be avoided by greater openness in research.\n\n\nOpenness increases societal impact of research\n\nMany published journal articles go unread, even in their topical domains. One study of citation rates found that 27% of papers published in the natural sciences and engineering go uncited18 b. Those who do read most papers likely come from research institutions similar to those of the authors, even if the findings could be impactful beyond these confines, for example by leading to policy changes or technological solutions for humanitarian purposes. In part, this is due to the challenging technical content, jargon, and niche topics—but it is also due to a lack of access to the journals where most research findings reside. (Making the content of these papers actually understandable or digestible by most potential readers is another challenge.) 
Considering the high and ever-increasing cost of scholarly journal subscriptions, research results should not be limited to those with the means to purchase access. By self-archiving (i.e., green open access) or publishing articles in open-access journals, researchers can ensure access for all members of society, including policymakers, funders, members of the media, entrepreneurs, and the general public—as well as scientists and engineers in the Global South.\n\nFurthermore, being more open with all of the outputs of research (e.g., papers, software, data) could help improve the general public’s perception of and trust in scientific research. Simply making research products available will not solve all of these problems—for one, it will not sway those who strongly believe ideas contrary to fact. However, ensuring everyone has access to the data researchers generate and analyze, and the software tools on which we rely, could eliminate one major barrier to trust in our findings19.\n\nLooking specifically at the field of engineering, we can also find examples of the positive effects of open knowledge dissemination. According to Chris Ategeka, founder of Health Access Corps, “Patenting a social-impact product hinders scale, ultimately obstructing the maximum impact that particular product would have in the world if it was open source”20. Thus, the clear benefit of using open research and development practices is achieving greater impact with your research products. The counterargument to this is that, through patenting, the entrepreneur can more easily market and sell their product in developed markets, which could then increase their ability to effect change by subsidizing their efforts in developing nations. This situation may hold true for products with broad appeal, and therefore it is necessary for the inventor to assess which path will produce the greatest impact, assuming, also, that we encourage and reward impact. 
It could be argued that in the majority of scenarios, open dissemination will yield greater impact through simplified adoption and adaptation by others—especially if the front-end development activities are incentivized in other ways.\n\n\nOpen science and the scholar’s research agenda\n\nFor the new researcher looking to build their profile and develop their research agenda, we present a vision and plan for performing research openly, synthesized from literature practices and advice. The ideas presented here are heavily inspired by examples from others active in this work, such as Lorena Barba’s Reproducibility PI Manifesto21, the Peer Reviewers’ Openness Initiative22, and others. While these exemplars provide useful case studies, it is important to emphasize that each individual must define for themselves a workflow that works for them. Sometimes it is enough to simply be more open than the current norms in their field.\n\nMany fields within engineering lack reputable open-access journals; indeed, only 17% of published manuscripts in engineering can be legally accessed by the public for free, while some sub-disciplines, such as chemical engineering, are lower still, at 9%7. Thus, the engineering researcher looking to publish in open-access venues can quickly become discouraged. The early-career researcher looking to make their work available while operating within this research environment can take simple steps, such as submitting preprints of any publications to engrXivc and depositing (otherwise non-accessible) conference papers or slide decks on Figshared.\n\nFor the researcher looking to develop their open workflow further, we recommend the following steps:\n\nMake all written research products openly accessible, either through green or gold open-access avenues. 
For fields that lack recognized, fully open-access journals, this objective is typically met by submitting preprints to services such as arXiv, engrXiv, PeerJ Preprints, or Figshare, depending on the topic. Conference papers, when not submitted to an open venue, can also be made openly available. Where possible, release all preprints under the Creative Commons Attribution (CC BY) licensee. If funds are available through research grants or designated library OA funding, a researcher may choose to follow the hybrid gold open-access model by paying a non-fully open journal to make a paper accessible. Note, however, that the fees associated with hybrid open-access journals are detrimental to researchers at smaller institutions23, and thus this model should not be viewed as a solution in keeping with the open-access movement.\n\nAny new research software should be developed openly (e.g., on GitHub) and released publicly under a permissive license, such as the BSD 3-clause license. The Git version-control system (or equivalent) should be used to track the history of software projects, and software releases associated with publications or data should be archived (with DOIs) using Zenodof. In addition, implementation details should be described as thoroughly as necessary to reproduce the work.\n\nAll data generated through research, when serving as the basis for a publication, should be archived publicly and cited appropriately in manuscripts or other documents. This data may also include figures and the plotting scripts that produce them, which can then be shared under a CC BY license and cited where appropriate.\n\nAs a means of supporting these efforts, researchers should take care to implement these policy statements by incorporating them into funding proposals, for example in Data Management Plans. 
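As a minimal sketch of the version-control and archiving step above (the project name, files, and version tag are hypothetical), the release that accompanies a publication can be marked with an annotated Git tag; Zenodo's GitHub integration archives tagged releases of a linked repository and mints a DOI for each:

```shell
# Work in a throwaway directory so this sketch is self-contained.
repo="$(mktemp -d)/open-solver"      # hypothetical project name
git init --quiet "$repo"
cd "$repo"
git config user.name "A. Author"     # placeholder identity for the sketch
git config user.email "author@example.com"

# Include a permissive license alongside the research code, then commit.
printf 'BSD 3-Clause License\n' > LICENSE
printf 'print("hello")\n' > solver.py
git add LICENSE solver.py
git commit --quiet -m "Initial public release"

# Tag the release associated with the publication. Pushing this tag to a
# Zenodo-linked GitHub repository triggers archival and DOI minting.
git tag -a v1.0.0 -m "Release accompanying the paper"
git tag --list                       # → v1.0.0
```

In a real project the tag would be pushed (e.g., `git push origin v1.0.0`) to the hosting service linked to Zenodo, and the resulting DOI cited in the manuscript.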
Note that policies where data and code are made “available upon request” are generally not sufficient for reproducibility24.\n\nSeveral community efforts have developed in recent years with the goal of defining and supporting open science practices. Some examples include the Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE)25 and the FORCE11 Software Citation Working Group, which developed the Software Citation Principles26 with the goal of standardizing software citation to help ensure authors/developers receive academic credit for their work in releasing open research software. On the publishing side, community-driven research journals have been built to promote open publishing practices. Some examples include the Journal of Open Source Softwareg 27 and the Journal of Open Engineeringh 28. Similarly, engrXiv, an open archive for engineering publications, has been developed to serve the engineering community, inspired by the success of the arXiv.\n\n\nChallenges to performing open science\n\nThe primary challenges facing those individuals interested in conducting open research generally involve incentives (or the lack thereof) and restrictive policies maintained by traditional publishers, in addition to the lack of a culture of sharing within the researcher’s disciplinary field. First, researchers are often pressured to carefully consider the venue in which they publish their work and to select only those that are “well established” and “high impact.” However, if these venues are not amenable to open research activities such as the posting of preprints, these challenges disincentivize those activities. To remedy this, the research community must continue to pressure publishers to modify their copyright transfer policies. 
Some progress has already been made in this effort through policies from funding sources such as the National Institutes of Healthi, the National Science Foundationj, the Bill & Melinda Gates Foundationk, and the Wellcome Trustl or from research institutions who require deposition in a repository. More information on these policies can be found on the Registry of Open Access Repository Mandates and Policiesm. Additionally, authors themselves can in some cases work with publishers to modify the standard publisher copyright transfer agreements allowing the author to retain more rightsn.\n\nAdditionally, promotion and tenure requirements typically focus exclusively on the final published manuscript and associated metrics, neglecting other research outputs such as code, data, solid models, etc., and their associated impacts. Some institutions actively discourage making these alternative research products available due to idealistic dreams of future income generation from licensing revenues. However, in reality, the majority of universities lose money through their technology commercialization offices, since translation of university intellectual property to commercial success is generally poorly realized29,30. Some institutions are instead pursuing alternatives such as EasyAccessIPo which promotes universal knowledge dissemination as a mechanism “to create impact from university research outcomes as opposed to monetary aims.” Ultimately it is likely that societal pressure is necessary to push more institutions to participate in such initiatives. For that to happen, the public first needs to be aware of the possible benefits of broad knowledge dissemination and needs to experience those benefits first hand. 
Researchers may even see benefits in terms of their scholarly productivity, as Frankenhuis and Nettle31 argue that open-science practices may actually increase creativity and researcher output.\n\nIt seems the challenges impeding greater adoption of open-science practices are mainly institutional and cultural, rather than technical. General venues for sharing and developing the products of research openly abound these days, with the availability of services like arXiv, engrXiv, and PeerJ Preprints for ensuring open access of publications; repositories like GitHub for developing (and version-controlling) research software openly; and data and software archives like Zenodo and Figshare, which practically have no file size limitationsp. Of course, some technical problems remain: How do we make results of computational science, particularly when it involves demanding high-performance computing resources, truly reproducible? How can we cite software and data consistently, when the version might change regularly? How can open practices be integrated into a researcher’s workflow without further straining the researcher’s already overburdened schedule?\n\nWhile cultural inertia and lack of institutional recognition/rewards pose significant challenges to increased openness in science, the biggest barrier to greater openness in research may be ambivalence or outright hostility in many research communities. Many academic researchers either disagree on or are unaware of the importance (and benefits) of working openly. Since they were not trained in doing this, e.g., during graduate school or during postdoctoral training, they also may simply be unaware of how to perform research openly, or the resources that are available to do so. Furthermore, since most of their colleagues, collaborators, and competitors do not practice open science, no pressure comes from the research community to change. 
In addition, some communities do not support, or actively oppose, activities such as submitting preprints.\n\nThis lack of pressure is related to the other major issue: lack of institutional recognition and reward for open practices. In general, academic researchers will work on what gets them credit for promotion and tenure—anything beyond that requires strong intrinsic motivation, or external motivators from the research community. At most institutions, promotion and tenure review includes some judgment (whether explicit or implicit) of where faculty publish their work, but many “high-impact” traditional publication venues—particularly domain journals—may not support, e.g., the posting of preprints.\n\n\nRecommendations for university leaders\n\nAs already discussed, there are real career advantages to open-access publishing and open dissemination of data, code, or other research products, and therefore, for some, the incentives to conduct open research may already be in place. However, for many, citation metrics alone are not enough to ensure success in promotion and tenure, and therefore they must play to the norms of their field, department, and institution. Thus, the institution (and the department) should look to institute policies that redefine how we measure success in academic engineering research. Some suggestions include focusing less on journal-level metrics and lending greater credibility to article-level metrics. For article-level metrics, go beyond the citation count and look for other evidence of research impact, such as alternative metrics (tweets, blog posts, media coverage) and replication by others. Lastly, look for evidence of broader implications such as economic development, student development, or even lives saved. 
Encourage your researchers to aim for those broader impacts and value them more highly than the publication of one more paper.\n\nThinking about what institutions can do to promote open engineering research and create support structures around open dissemination, we provide the following recommendations:\n\nRequire research products to be made openly available, and then support this requirement by maintaining a high-quality institutional repository, supporting other open repositories, and lobbying publishers to modify their copyright policies to permit the publishing of preprints and other products prior to journal submission as well as the archiving of final-version manuscripts.\n\nConvert technology commercialization offices into research impact offices. Use these offices as a mechanism for helping researchers broaden their impact through open research best practices, for funding social entrepreneurship, and for advocating these institutional activities at the state, national, and international levels.\n\nEmpower and fund our university libraries to help with open knowledge dissemination. Others have described ways in which research outputs can be made public in real time with the support of the library32; institutions should promote and support these efforts.\n\nEducate our undergraduate and graduate students on the importance of open knowledge dissemination and the practices that support it. Create and sponsor workshops that train participants in open-source software development, open research dissemination, and global development. Many institutions embrace service learning as a mechanism for greater civic engagement33—broaden this approach in a thoughtful and impactful manner, being careful to ensure that students are learning the right lessons and that partnering communities are not unduly burdened34. 
These approaches can help ensure that young engineers remain passionate about the field and hold onto the core societal mission of engineering35.\n\nThinking specifically about the perspective of the researcher within an institution, the following list of recommendations for departments is mostly targeted at changing criteria for promotion and tenure, and performance reviews, to encourage faculty to practice more open science:\n\nConsider accessibility/openness of research products along with quantity and “quality” in promotion and tenure review. Mandate self-archiving of publications (i.e., green open access).\n\nRecognize research products such as software and data, and their associated impacts (e.g., citations), as equal to traditional publications in scholarly impact.\n\nReduce the importance of publishing in traditional venues for promotion and tenure, recognizing these may be barriers to openness.\n\nProvide educational opportunities that train faculty and other researchers in open science skills, and those necessary to work with software and data.\n\nResearch communities that impede openness cannot be forced to change from the outside. Instead, changes to institutional reward systems will encourage researchers to improve their open practices and thus evolve communities from the inside.\n\n\nConclusions\n\nIn this paper we have reviewed the existing state of knowledge on the benefits and challenges of practicing openness in engineering research. We have further briefly outlined our thoughts on how open research practices in the sciences, engineering, and other fields can and should be employed by public universities to position themselves as centers for the creation and broad dissemination of knowledge as a public resource. 
The opposition to this proposal is immense, particularly in a political climate that devalues an educated populace and with systemic practices and policies that exclusively reward the monetization of any form of intellectual property. Change likely needs to be driven with grassroots initiatives that demonstrate the possible benefits and make it clear that tax dollars could fund these efforts if distributed properly and with accountability.\n\n\nData availability\n\nNo data are associated with this article.\n\n\nNotes\n\na“If I have seen further, it is by standing on the shoulders of giants.” Isaac Newton (1676), although similar statements can be found as far back as the 12th century.\n\nbOf course, papers that are read may not be cited, and papers that are cited may not actually be read.\n\nchttps://engrxiv.org\n\ndhttps://figshare.com\n\nehttps://creativecommons.org/licenses/by/4.0/legalcode\n\nfhttps://zenodo.org\n\nghttp://joss.theoj.org/\n\nhhttps://www.tjoe.org/\n\niNIH Public Access Policy https://publicaccess.nih.gov/index.htm\n\njNSF Public Access Plan https://www.nsf.gov/pubs/2015/nsf15052/nsf15052.pdf\n\nkBill & Melinda Gates Foundation Open Access Policy http://www.gatesfoundation.org/How-We-Work/General-Information/Open-Access-Policy\n\nlWellcome Trust Open Access Policy https://wellcome.ac.uk/funding/managing-grant/open-access-policy\n\nmROARMAP http://roarmap.eprints.org/\n\nnSPARC Author Addendum https://sparcopen.org/our-work/author-rights/#addendum\n\nohttp://easyaccessip.com/\n\npZenodo currently accepts datasets up to 50GB, but stores data in the CERN Data Center, along with 100PB of physics data from the Large Hadron Collider (https://zenodo.org/faq).",
"appendix": "Competing interests\n\n\n\nDRB is the founder of engrXiv and the Journal of Open Engineering; KEN is on the Editorial Board of the Journal of Open Source Software.\n\n\nGrant information\n\nThis material is based upon work supported by the National Science Foundation under grant no. 1733968.\n\n\nReferences\n\nPearce JM: Materials science. Building research equipment with free, open-source hardware. Science. 2012; 337(6100): 1303–1304. ISSN 0036-8075, 1095-9203. PubMed Abstract | Publisher Full Text\n\nPublic Research Universities: Recommitting to Lincoln’s Vision—An Educational Compact for the 21st Century. American Academy of Arts & Sciences, Cambridge, MA. 2016; ISBN 0-87724-109-0. Reference Source\n\nTennant J: Ashley Farley of the Gates Foundation: “Knowledge should be a public good.” 2017. Reference Source\n\nMcKiernan EC, Bourne PE, Brown CT, et al.: How open science helps researchers succeed. eLife. 2016; 5: pii: e16800. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTennant JP, Waldner F, Jacques DC, et al.: The academic, economic and societal impacts of Open Access: an evidence-based review [version 3; referees: 4 approved, 1 approved with reservations]. F1000Res. 2016; 5: 632. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArchambault E, Amyot D, Deschamps P, et al.: Proportion of open access papers published in peer-reviewed journals at the European and world levels—1996–2013. 2014. European Commission. Reference Source\n\nPiwowar H, Priem J, Larivière V, et al.: The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles. PeerJ. 2018; 6: e4375. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBuckheit JB, Donoho DL: WaveLab and Reproducible Research. Springer New York, New York, NY, 1995; 55–81. ISBN 978-1-4612-2544-7. Publisher Full Text\n\nMesnard O, Barba LA: Reproducible and replicable CFD: it’s harder than you think. 2016. [physics.comp-ph]. 
Reference Source\n\nBarba LA: The hard road to reproducibility. Science. 2016; 354(6308): 142. PubMed Abstract | Publisher Full Text\n\nAntelman K: Do open-access articles have a greater research impact? Coll Res Libr. 2004; 65(5): 372–382. Publisher Full Text\n\nPiwowar HA, Vision TJ: Data reuse and the open data citation advantage. PeerJ. 2013; 1(3): e175. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVandewalle P: Code sharing is associated with research impact in image processing. Comput Sci Eng. 2012; 14(4): 42–47. Publisher Full Text\n\nHowe A, Howe M, Kaleita AL, et al.: Imagining tomorrow's university in an era of open science [version 2; referees: 3 approved]. F1000Res. 2017; 6: 405. ISSN 2046-1402. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerg JM, Bhalla N, Bourne PE, et al.: SCIENTIFIC COMMUNITY. Preprints for the life sciences. Science. 2016; 352(6288): 899–901. PubMed Abstract | Publisher Full Text\n\nStrasser C: Preprints: The bigger picture. The Winnower. 2016; 3: e146955.56313. Publisher Full Text\n\nSoergel DA: Rampant software errors may undermine scientific results [version 2; referees: 2 approved]. F1000Res. 2015; 3: 303. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLarivière V, Gingras Y, Archambault É: The decline in the concentration of citations, 1900–2007. J Am Soc Inf Sci Tec. 2009; 60(4): 858–862. Publisher Full Text\n\nGrand A, Wilkinson C, Bultitude K, et al.: Open science: A new ‘trust technology’? Sci Commun. 2012; 34(5): 679–689. Publisher Full Text\n\nGoodier R: The case for open source design in low-cost medical patient transport. 2016. Reference Source\n\nBarba LA: Reproducibility PI manifesto. 2012. Publisher Full Text\n\nMorey RD, Chambers CD, Etchells PJ, et al.: The peer reviewers’ openness initiative: incentivizing open research practices through peer review. R Soc Open Sci. 
2016; 3(1): 150547. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSiler K, Haustein S, Smith E, et al.: Authorial and institutional stratification in open access publishing: the case of global health research. PeerJ. 2018; 6: e4269. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStodden V, Seiler J, Ma Z: An empirical analysis of journal policy effectiveness for computational reproducibility. Proc Natl Acad Sci U S A. 2018; 115(11): 2584–2589. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatz DS, Choi SCT, Niemeyer KE, et al.: Report on the third workshop on sustainable software for science: Practice and experiences (WSSSPE3). J Open Res Softw. 2016; 4(1): e37. Publisher Full Text\n\nSmith AM, Katz DS, Niemeyer KE, et al.: Software citation principles. PeerJ Comput Sci. 2016; 2: e86. Publisher Full Text\n\nSmith AM, Niemeyer KE, Katz DS, et al.: Journal of open source software (JOSS): design and first-year review. PeerJ Comput Sci. 2018; 4: e147. Publisher Full Text\n\nBerg D, Niemeyer KE, Fleischfresser L: [Editorial] Open publishing in engineering. J Open Eng. 2016. Publisher Full Text\n\nValdivia WD: University start-ups: Critical for improving technology transfer. Technical report, Center for Technology Innovation at Brookings. 2013. Reference Source\n\nSanami M, Flood T, Hall R, et al.: Translating healthcare innovation from academia to industry. Adv Mech Eng. 2017; 9(3). ISSN 1687-8140. Publisher Full Text\n\nFrankenhuis W, Nettle D: Open Science is Liberating and Can Foster Creativity. Open Science Framework. 2018. Publisher Full Text\n\nBrembs B: Open Science: Too much talk, too little action. 2017. Reference Source\n\nBringle RG, Hatcher JA: Innovative practices in service-learning and curricular engagement. New Dir Higher Educ. 2009; 2009(147): 37–46. Publisher Full Text\n\nBerg DR, Lee T, Buchanan E: A methodology for exploring, documenting, and improving humanitarian service learning in the university. 
Journal of Humanitarian Engineering. 2016; 4(1). Publisher Full Text\n\nEseonu C, Cortes MA: Engineering for Good: A Case of Community Driven Engineering Innovation. Journal of Humanitarian Engineering. 2018; ISSN 2200-4904. Reference Source"
}
|
[
{
"id": "33494",
"date": "30 Apr 2018",
"name": "Nathan L. Vanderford",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well-written opinion article that presents a well-articulated argument for the science community to increase the practice of “openness” in research. After clearly defining “open” research (caveat below), key components of the article include a discussion on the importance of open research to science, some recommendations on how researchers can conduct their research “openly” and how universities can support such work, and there is a discussion about challenges to open science. The article is a valuable contribution to this growing movement.\nAlthough the article is well written and the argument is clearly articulated with appropriate references to the literature, I believe that there are a few points that the authors should consider and respond to prior to this work being approved for indexing.\nThe authors reference the work as a “review” but in the parlance of F1000Research, it is an opinion article and it is written as such since it is an argument for conducting more open research. Thus, this reviewer suggests not referring to the work as a “review.”\n\nThe article title and a few areas within the text are written specifically toward engineering research, but this topic transcends disciplines and much of the content can be applied broadly. As such, the authors should consider whether the title and respective text should be less restricted to engineering. 
Perhaps the authors could present the topic as one that is more broadly applicable but then more clearly state that they are using engineering as a case study/example to illustrate their points.\n\nThe authors begin the main text with a clear definition of open science, but it is unclear if this is the authors’ definition or if it is taken from the literature. This should be clarified.\n\nThe authors should consider adding a bit more discussion on the challenges associated with open research. For example, more could be said about how the culture of academia doesn’t particularly value open research currently. Today’s academic culture is still too focused on where a journal is ranked based on impact factor and this plays into aspects of faculty life such as faculty hiring, promotion/tenure, faculty performance evaluations, post-tenure review, etc. While the authors touch on some of these points, a greater discussion and recommendation for a culture change could be a valuable, thought-provoking addition to the article. More could also be said regarding some of the minutiae of how open research needs to be implemented, especially related to the infrastructure needed to support it and the financial costs associated with that infrastructure. Who should pay for hosting, publication costs, etc.?\n\nIn summary, this is a well-articulated article on an important, timely topic. The few points above slightly dampen this reviewer’s enthusiasm at this time related to giving full approval. As such, I look forward to reviewing a revised version of the article.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "4035",
"date": "11 Oct 2018",
"name": "Devin Berg",
"role": "Author Response",
"response": "Dr. Vanderford,Thank you for your thoughtful review of our paper and for your specific suggestions for improvement. We've attempted to address your comments in this revision as outlined below.1. The authors reference the work as a “review” but in the parlance of F1000Research, it is an opinion article and it is written as such since it is an argument for conducting more open research. Thus, this reviewer suggests not referring to the work as a “review.” We've revised the language to make it more clear that this is not a review article and more of an opinion article.2) The article title and a few areas within the text are written specifically toward engineering research, but this topic transcends disciplines and much of the content can be applied broadly. As such, the authors should consider whether the title and respective text should be less restricted to engineering. Perhaps the authors could present the topic as one that is more broadly applicable but then more clearly state that they are using engineering as a case study/example to illustrate their points.We agree that many of the issues that we've addressed are not unique to engineering and do indeed apply more broadly. However, our goal with this work was to write a targeted paper for the engineering community that specifically addresses the issues from that frame. There are other works in the literature that take a more general approach and we've tried to reference them appropriately to bring them to the reader's attention.3) The authors begin the main text with a clear definition of open science, but it is unclear if this is the authors’ definition or if it is taken from the literature. This should be clarified.The definition of open science included in our paper is a synthesis of other available definitions found elsewhere. 
We've edited to make this more clear and inserted some references to support this.\n\n4) The authors should consider adding a bit more discussion on the challenges associated with open research. For example, more could be said about how the culture of academia doesn’t particularly value open research currently. Today’s academic culture is still too focused on where a journal is ranked based on impact factor and this plays into aspects of faculty life such as faculty hiring, promotion/tenure, faculty performance evaluations, post-tenure review, etc. While the authors touch on some of these points, a greater discussion and recommendation for a culture change could be a valuable, thought-provoking addition to the article. More could also be said regarding some of the minutiae of how open research needs to be implemented, especially related to the infrastructure needed to support it and the financial costs associated with the infrastructure. Who should pay for hosting, publication costs, etc.?\n\nWe have addressed issues around a lack of valuation for open science practices and the need for change in promotion/tenure evaluation criteria. We've tried to emphasize this more in the paper. We would also point out that many of our recommendations do not require financial support to be feasible and indeed would be possible under existing funding models."
}
]
},
{
"id": "33671",
"date": "04 May 2018",
"name": "Joshua M. Pearce",
"expertise": [
"Reviewer Expertise open hardware",
"solar photovoltaics",
"sustainability",
"energy policy"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an important paper - and sadly still needs to be written in 2018 - as open science should clearly be the default for the maximum rate of progress in any field. It is clear that we are headed this way but the rate of change could certainly be faster. In addition to the comments from the other reviewers I offer the following constructive points:\nAlthough this is an opinion piece I would encourage the authors to avoid all unsubstantiated claims. Ideally every fact not derived from the authors' own work should have a citation. Although I agree with the first line - the article would be stronger if the footnote offered either a substantial list of referenced arguments for it -- or wait until the end to make the claim. Instead of simply saying open source software - it would be better to use FOSS (and explain the difference). To be correct in the definition of open science - the use of free and open source hardware should be included in addition to FOSS. I am not sure that open science is benefited by \"the half-baked ideas, the napkin sketches, the first drafts, and the failures.\" For example, engineers working on water purification technology should not publish their tech until they are sure it works unless it has warning notices all over it. It would be interesting to speculate on why engineers lag so far behind say physicists in making their work open access. 
Is it because the various engineering societies have more restrictive publishing agreements than the major publishers?\n\nThere are some examples of methods to quantify the impact of research on society. There is a rich literature showing high ROIs for industry funded research for the business world. In addition, using the concept of downloaded substitution value one can calculate the value to society for open source scientific hardware designs as well as software.\n\nIncreasing societal impact of research: you make a good point about the scientists in the Global South - but you should consider going one step further and encouraging engineers working on technologies that can solve the problems of the world's poorest people to make sure they are released as open source appropriate technologies.\n\nImpact: you can make a stronger case. Patenting slows innovation (and there are a ton of studies showing this) and increases costs for consumers. One of the most clear recent examples is the staggering decrease in costs and increases in performance of 3D printers following the open source release of the RepRap project.\n\nEngineering journals: most of them do offer some sort of open access policy - either to post preprints or pay for open access.\n\nOpen workflow should encourage the use of FOSS and FOSH whenever possible.\n\npg 5 seems overly pessimistic and also does not cite proof. I have found that most researchers are sympathetic to open science and that many forward-thinking institutions are pushing open access pretty hard.\n\nEven just going to article level metrics - where OA has an advantage for citations - should be useful and already has a built in incentive.\n\nSomething should be said about open source business models. Patents are not the only way to go -- i.e. RedHat is a multi-billion per year open source company. 
Finally, many of your suggestions are good but could be strengthened if you home in on university leaders' self-interest for encouraging them to actually implement them.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "4042",
"date": "11 Oct 2018",
"name": "Devin Berg",
"role": "Author Response",
"response": "Dr. Pearce,Thank you for your thorough review. We agree that open science practices should indeed be the default for engineering research and progress. We'd like to address your individual points below.1) Although this is an opinion piece I would encourage the authors to avoid all unsubstantiated claims. Ideally every fact not derived from the authors' own work should have a citation.We have gone through the paper again and attempted to address any issues with uncited statements. Several new citations and references have been added.2) Although I agree with the first line - the article would be stronger if the footnote offered either a substantial list of referenced arguments for it -- or wait until the end to make the claim.The first line is stated as an opinion, which we intend to use as the basis for the rest of the paper. This statement is fleshed out more fully with references to other related works in the following paragraphs.3) Instead of simply saying open source software - it would be better to use FOSS (and explain the difference).We have added language to make this distinction.4) To be correct in the definition of open science - the use of free and open source hardware should be included in addition to FOSS.Agreed. We have added hardware to this definition.5) I am not sure that open science is benefited by \"the half-baked ideas, the napkin sketches, the first drafts, and the failures.\" For example, engineers working on water purification technology should not publish their tech until they are sure it works unless it has warning notices all over it.We agree with the first sentence here and have modified the language to word this differently. For the second sentence, we agree that there needs to be more education on what early work is and its use cases. 
However, we don't feel that this is limited to any one sub-discipline such as \"water purification technology\" and is actually an issue that applies to virtually all of engineering.\n\n6) It would be interesting to speculate on why engineers lag so far behind say physicists in making their work open access. Is it because the various engineering societies have more restrictive publishing agreements than the major publishers?\n\nThis is a difficult question to answer. Perhaps there are cultural limitations due to the applied or industry connected nature of engineering. We've added some discussion to the \"Challenges\" section to try to address this better.\n\n7) There are some examples of methods to quantify the impact of research on society. There is a rich literature showing high ROIs for industry funded research for the business world. In addition, using the concept of downloaded substitution value one can calculate the value to society for open source scientific hardware designs as well as software.\n\nWe have added some discussion and relevant citations to the societal impact section of the paper.\n\n8) Increasing societal impact of research: you make a good point about the scientists in the Global South - but you should consider going one step further and encouraging engineers working on technologies that can solve the problems of the world's poorest people to make sure they are released as open source appropriate technologies.\n\nWe agree and have added some additional discussion to this section of the paper.\n\n9) Impact: you can make a stronger case. Patenting slows innovation (and there are a ton of studies showing this) and increases costs for consumers. 
One of the most clear recent examples is the staggering decrease in costs and increases in performance of 3D printers following the open source release of the RepRap project.\n\nWe have added some discussion and references to support this.\n\n10) Engineering journals: most of them do offer some sort of open access policy - either to post preprints or pay for open access.\n\nWe have to generally disagree with this point. Several of the larger engineering societies are lagging in explicit support of preprinting. Additionally, we don't view hybrid OA journals as a solution to this problem as they are largely shifting the fees and associated publisher profits from one hand to the other. We view hybrid OA journals as a temporary solution on the path towards full OA journals with reasonable APCs operated by societies or other non-profit academic institutions.\n\n11) Open workflow should encourage the use of FOSS and FOSH whenever possible.\n\nWe have edited the language to match.\n\n12) pg 5 seems overly pessimistic and also does not cite proof. I have found that most researchers are sympathetic to open science and that many forward-thinking institutions are pushing open access pretty hard.\n\nWe have edited the language here. Also, it seems that these forward-thinking institutions are pushing OA publishing but not open science.\n\n13) Even just going to article level metrics - where OA has an advantage for citations - should be useful and already has a built in incentive.\n\nTrue, we have reiterated this point.\n\n14) Something should be said about open source business models. Patents are not the only way to go; RedHat is a multi-billion per year open source company.\n\nThis is true, though we've tried to focus this paper more on research than industry/innovation.\n\n15) Finally, many of your suggestions are good but could be strengthened if you home in on university leaders' self-interest for encouraging them to actually implement them.\n\nWe have edited the language to reiterate the benefits of self-interest."
}
]
}
] | 1
|
https://f1000research.com/articles/7-501
|
https://f1000research.com/articles/7-1627/v1
|
11 Oct 18
|
{
"type": "Systematic Review",
"title": "Effectiveness of triple antibiotic paste as an intra-canal medication for the root canal treatment of non-vital teeth with apical periodontitis: A systematic review",
"authors": [
"Ehab Abdel Hamid",
"Saied Abdel Aziz",
"Hany Samy Sadek",
"Ahmed Mohamed Ibrahim",
"Saied Abdel Aziz",
"Hany Samy Sadek",
"Ahmed Mohamed Ibrahim"
],
"abstract": "Background: This is a systematic review to assess and provide a pooled effect estimate, if possible, for the effects of triple antibiotic paste as an intra-canal medication for root canal treatment of mature permanent non-vital teeth with apical periodontitis. This review will assess post-operative pain, flare-up incidence, and clinical and radiographic healing. Methods: Nine electronic databases (Pubmed, CENTRAL, VHL, Scopus, EBSCOhost, Web of Science, Trip, OpenGrey, Proquest) were searched along with two major clinical trial registries. Conference proceedings, reference lists and citations of the included studies were also searched. A total of 537 records were identified and 392 were obtained after duplicate removal. Six records were identified after screening and three studies were included after full text eligibility assessment. Results: Three comparators were reported in the included studies: calcium hydroxide paste, 2% chlorhexidine gel and ledermix paste. There was no statistically significant difference between triple antibiotic paste and calcium hydroxide regarding postoperative pain, and clinical and radiographic healing of periapical lesions. There was no difference between triple antibiotic paste and chlorhexidine regarding flare-up incidence. However, triple antibiotic paste reduced the level of post-operative pain more than ledermix, which was statistically significant. Conclusions: The evidence is still insufficient surrounding the use of triple antibiotic paste; therefore more clinical investigations with high levels of evidence and rigorous methodologies are needed.",
"keywords": [
"Root canal therapy",
"postoperative pain",
"nonvital tooth"
],
"content": "Introduction\n\nPulpal tissue infection initiates inflammation of periapical tissues and results in apical periodontitis1. Root canal treatment aims to prevent or manage apical periodontitis by decreasing the intra-canal microbial load2. The anatomy of the root canal system makes it almost impossible to completely eliminate the bacteria using conventional mechanical and chemical techniques, even with the highest technical standards3. Therefore, an effective intra-canal medication in the root canal is required to kill any remaining bacteria4, thereby reducing postoperative pain and inducing periapical healing5. Calcium hydroxide has been considered the gold standard for optimally disinfecting root canals; however, it had been reported that Enterococcus faecalis the dominant bacteria in resistant endodontic infections, is resistant to calcium hydroxide6. Recently, triple antibiotic paste (a mixture of ciprofloxacin, metronidazole and minocycline) has been used as an intra-canal medication for root canal disinfection7. It had been shown that triple antibiotic paste could kill any remaining bacteria in the root canal system8.\n\nTo the best of our knowledge, the effectiveness of triple antibiotic paste as an intra-canal medication in the treatment of non-vital permanent teeth with apical periodontitis hasn’t yet been subjected to systematic review. Thus, the purpose of this study is to systematically review and provide a pooled effect estimate, if possible, for the effects of triple antibiotic paste as an intra-canal medication for root canal treatment of mature permanent non-vital teeth with apical periodontitis. This review will assess post-operative pain, flare-up incidence, and clinical and radiographic healing.\n\n\nMethods\n\nThis systematic review was reported according to PRISMA Statement9. Supplementary File 1 contains the completed PRISMA checklist. 
The protocol was registered in PROSPERO database, registration number: CRD42018106518.\n\nPopulation: adult patients who had non-vital permanent mature teeth with apical periodontitis or previously root canal-treated teeth with apical periodontitis, undergoing root canal treatment in multiple visits. Primary teeth, immature, vital teeth and single visit root canal treatment were excluded.\n\nIntervention: triple antibiotic paste, which is a mixture of ciprofloxacin, metronidazole and minocycline.\n\nComparators: placebo, no intra-canal medication or any other intra-canal medication other than the intervention.\n\nOutcome measures: primary outcomes were post-operative pain and flare-up incidence after the first visit as defined by the trial authors. Secondary outcome was clinical and radiographic healing as defined by the trial authors with at least one year follow-up.\n\nStudy designs: randomized and quasi randomized controlled clinical trials were included. Non-randomized clinical trials, observational, in vitro or animal studies were excluded.\n\nOther: there were no restrictions on language, timing or settings.\n\nThe following electronic databases were searched: CENTRAL, Medline via Pubmed, Virtual Health Library, Trip, Scopus, Web of Science, EBSCOhost, OpenGrey, ProQuest thesis and dissertation. Ongoing clinical trial registries were searched: ICTRP and ClinicalTrials.gov. The search was conducted until 5/7/2018. Other sources included searching reference lists of included studies and relevant systematic reviews10,11, citation searching of included studies done in Google Scholar and searching conference proceedings of International Association of Dental Research (IADR), European Society of Endodontology (ESE) and American Association of Endodontists (AAE).\n\nSearch strategy was conducted using free text terms and controlled terms (MeSH) regarding the population and intervention. 
A sensitivity- and precision-maximizing filter for randomized clinical trials was used in PubMed, as recommended by the Cochrane handbook12. The full search strategy for the CENTRAL database is shown in Table 1.\n\nAfter searching the electronic sources, 534 records were identified, and an additional three records were identified through searching other sources. After duplicate removal by Endnote X7 reference manager software, 392 records were identified. Two independent reviewers (EAH, AMI) screened the search results by title and abstract and then by full text assessment to determine the included studies.\n\nA data extraction sheet was written according to the main data extraction items recommended by the Cochrane handbook12.\n\nThe following items were extracted from each included study: methods - study design, setting and country; participants - selection criteria, tooth type, tooth condition, diagnostic criteria, gender/age, number randomized/analyzed and unit of randomization/analysis; interventions - groups, cleaning and shaping technique, irrigation method, intra-canal medication placement technique and period; outcomes - outcome domain, outcome measurement, time points and the outcome assessor.\n\nTo assess the risk of bias in the included randomized clinical trials, the revised RoB 2.0 domain-based tool was used13. Studies were judged to be of “low risk”, “some concerns of risk” or “high risk” based on these domains: bias arising from the randomization process, bias due to deviation from intended interventions, bias due to missing outcome data, bias in measurement of the outcome and bias in selection of the reported results. The individually randomized, parallel group trials template of the RoB 2.0 tool was used on the outcome level for each study.\n\nA single tooth, or a patient with a single tooth, was chosen as the unit of analysis. 
For dichotomous outcomes, the risk ratio and its 95% CI were used as the measure of effect size. For continuous outcomes, the mean difference and its 95% CI were used as the measure of effect size. Meta-analysis was not possible because the studies assessed different outcomes and could not be combined. A qualitative synthesis was done instead.\n\n\nResults\n\nAfter screening 392 records by title and abstract, 386 records were irrelevant and six records were identified for full text eligibility assessment: one study14 was excluded due to using another combination of antibiotics (metronidazole, ciprofloxacin and clindamycin); two studies were awaiting assessment due to unavailable full text15 and no separate results for non-vital teeth16 (the authors of these two studies were contacted to obtain the missing data, with no response). Therefore, three studies17–19 were included in this systematic review (Figure 1).\n\nCharacteristics of the included studies are presented in Table 2. All the included studies were randomized parallel multi-arm clinical trials conducted in a single center. The included studies were set in university or dental college hospitals. There were two studies17,18 in India and one study in Turkey19. A total of 171 patients with 174 teeth were enrolled in the three included studies and 167 teeth were analyzed. All types of teeth were included, either single or multiple rooted teeth.\n\nRCT, randomized clinical trial; Ca (OH)2, calcium hydroxide paste; TAP, triple antibiotic paste; PAD, Photo activated disinfection; CHX, chlorhexidine; ICM, intra-canal medication; VRS, verbal rating scale; VAS, visual analogue scale.\n\nThe intervention of interest was a combination of three antibiotics (metronidazole, ciprofloxacin and minocycline) mixed with inert vehicles to form a paste. In the included studies, three comparators were reported: calcium hydroxide paste, 2% chlorhexidine gel or ledermix paste.\n\nJohns et al. 
201417 evaluated clinical and radiographic healing of periapical lesions, in which treatment success was based on either strict criteria (absence of clinical signs and symptoms with complete radiographic healing) or loose criteria (absence of clinical signs and symptoms with complete healing or reduction of lesion size). Uyan et al. 201819 evaluated postoperative pain after root canal retreatment. Sinhal et al. 201718 evaluated the incidence of interappointment flare-up in diabetic patients, in which flare-up incidence was defined as scores 4 and 5 of the verbal rating scale.\n\nThe risk of bias summary is presented in Table 3. Two studies17,18 had an overall high risk of bias due to high risk of bias in the domains regarding selection of the reported results and measurement of the outcome. One study18 reported the results of flare-up incidence with no time points, while the other study17 reported only the results of clinical and radiographic healing for one time point and did not mention any data about the blinding of the outcome assessors.\n\nIt was not possible to combine the results of the studies in a meta-analysis, because different outcomes were assessed at different time points and each outcome was reported by only one study. A narrative synthesis was done for each study separately.\n\nTriple antibiotic paste vs. calcium hydroxide. Regarding post-operative pain, Uyan et al. 201819 found that at six hours, the mean and standard deviation values for pain intensity were 24.44 ± 3.13 in the triple antibiotic paste group and 19.77 ± 3.18 in the calcium hydroxide group (mean difference = 4.67, 95% CI 2.72-6.63). At 12 hours, the mean and standard deviation values for pain intensity were 40.15 ± 3.54 in the triple antibiotic paste group and 28.38 ± 3.59 in the calcium hydroxide group (mean difference = 11.77, 95% CI 9.56-13.98). 
At 24 hours, the mean and standard deviation values for pain intensity were 44.95 ± 3.51 in the triple antibiotic paste group and 36.92 ± 3.56 in the calcium hydroxide group (mean difference = 8.04, 95% CI 5.85-10.23). At 48 hours, the mean and standard deviation values for pain intensity were 36.82 ± 3.14 in the triple antibiotic paste group and 38.75 ± 3.19 in the calcium hydroxide group (mean difference = -1.93, 95% CI -3.89 to 0.03) after the first visit of treatment.\n\nCalcium hydroxide decreased the level of post-operative pain significantly more than triple antibiotic paste at 6, 12 and 24 hours. There was no statistically significant difference at 48 hours.\n\nRegarding flare-up incidence, Sinhal et al. 201718 reported that 40% of patients in the calcium hydroxide group experienced interappointment flare-up, with no flare-up seen in the triple antibiotic paste group. However, the presented data in this study were insufficient to calculate the effect size and its 95% confidence interval. Uyan et al. 201819 reported no flare-up incidence in either the calcium hydroxide or triple antibiotic paste groups.\n\nRegarding clinical and radiographic healing, Johns et al. 201417 presented data only at 18 months follow-up. Based on the strict criteria of success, 13 out of 20 participants were healed in the triple antibiotic paste group and 7 out of 20 participants were healed in the calcium hydroxide group (RR 1.86, 95% CI 0.94-3.66). Based on the loose criteria of success, 19 out of 20 participants were healed in the triple antibiotic paste group and 17 out of 20 participants were healed in the calcium hydroxide group (RR 1.12, 95% CI 0.91-1.38). There was no statistically significant difference between the two groups at 18 months follow-up based on either of the criteria of success.\n\nTriple antibiotic paste vs. 2% chlorhexidine gel. Only flare-up incidence was evaluated by Sinhal et al. 
201718, who reported no flare-ups in either the chlorhexidine or the triple antibiotic paste group. However, the data presented in this study were insufficient to calculate the effect size and its 95% confidence interval.\n\nTriple antibiotic paste VS ledermix paste. Regarding post-operative pain, Uyan et al. 201819 found that at six hours, pain intensity (mean ± standard deviation) was 24.44 ± 3.13 in the triple antibiotic paste group and 54.89 ± 3.31 in the ledermix group (mean difference = -30.45, 95% CI -32.47 to -28.43). At 12 hours, pain intensity was 40.15 ± 3.54 in the triple antibiotic paste group and 64.27 ± 3.74 in the ledermix group (mean difference = -24.12, 95% CI -26.41 to -21.83). At 24 hours, pain intensity was 44.95 ± 3.51 in the triple antibiotic paste group and 61.77 ± 3.71 in the ledermix group (mean difference = -16.82, 95% CI -19.09 to -14.55). At 48 hours, pain intensity was 36.82 ± 3.14 in the triple antibiotic paste group and 44.32 ± 3.32 in the ledermix group (mean difference = -7.50, 95% CI -9.53 to -5.47) after the first visit of treatment. Triple antibiotic paste reduced post-operative pain to a statistically significant degree compared with ledermix at all time points. Regarding flare-up incidence, Uyan et al. 201819 reported no flare-ups in either the triple antibiotic paste or the ledermix group.\n\n\nDiscussion\n\nThe use of antibiotic agents has been suggested by various authors for the eradication of bacteria associated with persistent endodontic infections20. Because of the side effects of systemic application and its ineffectiveness in necrotic teeth, local application of antibiotics is considered more effective in endodontics21. Research has shown that a combination of metronidazole, ciprofloxacin and minocycline could destroy the bacteria in infected root canal dentin and periapical lesions8. 
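The mean differences quoted in the results above follow from the standard two-sample formula: MD = m1 − m2, with standard error sqrt(s1²/n1 + s2²/n2). The per-group sample size is not stated in this excerpt, so the sketch below (plain Python, not code from the review) uses an assumed n = 20 per group purely for illustration of the calculation.

```python
import math

def mean_difference_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Mean difference and approximate 95% CI via the normal approximation."""
    md = m1 - m2
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # SE of the difference in means
    return md, md - z * se, md + z * se

# Six-hour pain scores, triple antibiotic paste vs calcium hydroxide
# (means/SDs from the text; n = 20 per group is an assumption, not reported here)
md, lo, hi = mean_difference_ci(24.44, 3.13, 20, 19.77, 3.18, 20)
print(f"MD {md:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

With a different true sample size the interval would widen or narrow accordingly; only the point estimate MD = 4.67 is independent of n.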
The purpose of this systematic review was to assess the effectiveness of triple antibiotic paste as an intra-canal medication for endodontic treatment of non-vital teeth with apical periodontitis in terms of post-operative pain, flare-up incidence, and clinical and radiographic healing.\n\nThe search strategy in this systematic review was designed to be comprehensive, in order to identify all published and unpublished articles. Seven electronic databases were searched, along with two clinical trial registries. Grey literature, conference proceedings, and backward and forward citation searches were also included in the search strategy to identify all relevant articles. Risk of bias was assessed using the revised RoB 2.0 tool13, which has some advantages over the original tool described in the Cochrane handbook12. The RoB 2.0 tool assesses risk of bias at the outcome level for each study and provides templates for different study designs, making the overall risk of bias judgement easier to reach.\n\nRegarding post-operative pain, the evidence could be regarded as insufficient, since only one study19 provided data for this outcome. This study also suffered from several limitations, such as adding new patients to replace those lost to follow-up, excluding patients from the analysis after treatment, and an unclear unit of randomization and analysis, since the number of teeth exceeded the number of patients with no mention of whether they were randomized an equal number of times. 
Therefore, this study had a unit-of-analysis issue22.\n\nRegarding flare-up incidence, the evidence could be regarded as insufficient, since only one study18, which had a high risk of bias, reported data for this outcome, and the data were insufficient to calculate the effect size and its 95% confidence interval.\n\nRegarding clinical and radiographic healing, the evidence could be regarded as insufficient, since it was provided by one study17 that had a high risk of bias.\n\nOne limitation of this review is the small number of studies identified despite the comprehensive search strategy. Also, there were two studies awaiting assessment15,16, which, had they been available, might have changed the results of this review. Most of the included studies had a high risk of bias, so their results should be interpreted with caution.\n\nIt could be concluded that the evidence for the effectiveness of triple antibiotic paste is insufficient regarding post-operative pain, flare-up incidence, and clinical and radiographic healing. The number of randomized clinical studies assessing the effectiveness of triple antibiotic paste as an intra-canal medication is small. Consequently, we recommend conducting clearly reported, well designed, high quality, large randomized clinical trials to assess the effectiveness of triple antibiotic paste as an intra-canal medication in endodontic treatment of non-vital teeth with apical periodontitis or failed cases, focusing on relevant patient outcomes.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
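The risk ratios reported for Johns et al. 2014 (13/20 vs 7/20 healed under strict criteria) can be reproduced with the standard log-RR normal approximation. The sketch below is plain Python written for illustration, not code from the review; the helper name `risk_ratio` is ours.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio and approximate 95% CI via the log-RR normal approximation."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of ln(RR) for two independent proportions
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Strict criteria: 13/20 healed (triple antibiotic paste) vs 7/20 (calcium hydroxide)
rr, lo, hi = risk_ratio(13, 20, 7, 20)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR 1.86, 95% CI 0.94 to 3.66
```

The same call with the loose-criteria counts (19/20 vs 17/20) recovers the reported RR 1.12, 95% CI 0.91 to 1.38.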
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: PRISMA checklist.\n\nClick here to access the data\n\n\nReferences\n\nSundqvist G: Bacteriologic studies of necrotic dental pulps [PhD thesis]. Umeå, Sweden: University of Umeå; 1976. Reference Source\n\nMöller AJ, Fabricius L, Dahlén G, et al.: Influence on periapical tissues of indigenous oral bacteria and necrotic pulp tissue in monkeys. Scand J Dent Res. 1981; 89(6): 475–84. PubMed Abstract | Publisher Full Text\n\nGursoy H, Ozcakir-Tomruk C, Tanalp J, et al.: Photodynamic therapy in dentistry: a literature review. Clin Oral Investig. 2013; 17(4): 1113–25. PubMed Abstract | Publisher Full Text\n\nSpångberg LSW, Haapasalo M: Rationale and efficacy of root canal medicaments and root filling materials with emphasis on treatment outcome. Endod Topics. 2002; 2(1): 35–58. Publisher Full Text\n\nCalişkan MK: Prognosis of large cyst-like periapical lesions following nonsurgical root canal treatment: a clinical review. Int Endod J. 2004; 37(6): 408–16. PubMed Abstract | Publisher Full Text\n\nSiqueira JF Jr, Lopes HP: Mechanisms of antimicrobial activity of calcium hydroxide: a critical review. Int Endod J. 1999; 32(5): 361–369. PubMed Abstract | Publisher Full Text\n\nSato T, Hoshino E, Uematsu H, et al.: In vitro antimicrobial susceptibility to combinations of drugs on bacteria from carious and endodontic lesions of human deciduous teeth. Oral Microbiol Immunol. 1993; 8(3): 172–6. PubMed Abstract | Publisher Full Text\n\nSato I, Ando-Kurihara N, Kota K, et al.: Sterilization of infected root-canal dentine by topical application of a mixture of ciprofloxacin, metronidazole and minocycline in situ. Int Endod J. 1996; 29(2): 118–24. 
PubMed Abstract | Publisher Full Text\n\nLiberati A, Altman DG, Tetzlaff J, et al.: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009; 6(7): e1000100. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnjaneyulu K, Nivedhitha MS: Influence of calcium hydroxide on the post-treatment pain in Endodontics: A systematic review. J Conserv Dent. 2014; 17(3): 200–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSanu N, Sindhu R: Comparative evaluation of postoperative pain with different intracanal medicaments? A Systematic Review. AJPTR. 2016; 6(5): 34–61. Reference Source\n\nHiggins JPT, Green S: Cochrane Handbook for Systematic Reviews of Interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Reference Source\n\nHiggins JPT, Sterne JAC, Savović J, et al.: A revised tool for assessing risk of bias in randomized trials. In: CHANDLER J, MCKENZIE J, BOUTRON I, WELCH V (editors). Cochrane Methods. Cochrane Database of Systematic Reviews, 2016.\n\nPrasad LK, Tanwar BS, Kumar KN: Comparison of calcium hydroxide and triple antibiotic paste as intracanal medicament in emergency pain reduction: in vivo study. Int J Oral Care Res. 2016; 4(4): 244–6. Publisher Full Text\n\nSanu N, Sindhu R: Comparative evaluation of inter appointment pain with calcium hydroxide and triple antibiotic paste as intracanal medicaments in patients with apical periodontitis: A randomized controlled clinical trial. J Dent Res. 2016; 95: 0272. Reference Source\n\nPai S, Vivekananda Pai AR, Thomas MS, et al.: Effect of calcium hydroxide and triple antibiotic paste as intracanal medicaments on the incidence of inter-appointment flare-up in diabetic patients: An in vivo study. J Conserv Dent. 2014; 17(3): 208–11. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohns DA, Varughese JM, Thomas K, et al.: Clinical and radiographical evaluation of the healing of large periapical lesions using triple antibiotic paste, photo activated disinfection and calcium hydroxide when used as root canal disinfectant. J Clin Exp Dent. 2014; 6(3): e230–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSinhal TM, Shah RRP, Shah N, et al.: Comparative evaluation of 2% chlorhexidine gel and triple antibiotic paste with calcium hydroxide paste on incidence of interappointment flare-up in diabetic patients: A randomized double-blinded clinical study. Endodontology. 2017; 29(2): 136–41. Publisher Full Text\n\nUyan HM, Olcay K, Özcan M: Comparative evaluation of postoperative pain intensity after single-visit and multiple-visit retreatment cases: a prospective randomized clinical trial. Brazilian Dent Sci. 2018; 21(1): 26–36. Publisher Full Text\n\nAbbott PV: Medicaments: aids to success in endodontics. Part 2. Clinical recommendations. Aust Dent J. 1990; 35(6): 491–496. PubMed Abstract | Publisher Full Text\n\nMohammadi Z, Abbott PV: On the local applications of antibiotics and antibiotic-based agents in endodontics and dental traumatology. Int Endod J. 2009; 42(7): 555–67. PubMed Abstract | Publisher Full Text\n\nFergusson D, Aron SD, Guyatt G, et al.: Post-randomisation exclusions: the intention to treat principle and excluding patients from analysis. BMJ. 2002; 325(7365): 652–4. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "45204",
"date": "23 May 2019",
"name": "A. R. Vivekananda Pai",
"expertise": [
"Dr A. R. Vivekananda Pai: Restorative dental materials, Restorative & Esthetic dentistry, and Endodontics",
"Dr Sumanth Kumbargere Nagraj: Evidence Based Health Care"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIntroduction:\nNo comments.\n\nMethods:\nThe authors have to mention if the data was extracted in duplicate or not. If so, who were those authors? If not, why was it not considered? Same applies to risk of bias assessment.\n\n\"Risk of bias\" could be abbreviated as \"RoB\" when it is used as a term for the first time in the text as the expanded word for the abbreviation RoB, which is mostly stated in the text as a tool, is not found.\n\nRisk of bias table should have quotes or relevant statements from the article to justify the risk judgement.\n\nIt is not clear what measures were adopted to address any disagreement among the authors while following methodology such as data extraction and risk of bias assessment.\n\nIt is not clear why the randomized clinical trial (RCT) filter was not used in the search strategy.\n\nAs there were no restrictions in the language, the authors could have searched in Asian and Latin American databases.\n\nThe search dates for each database are not clear.\n\nIt is good practice to grade the quality of evidence.\n\nDiscussion:\nSuggested grammatical corrections:\nPage no. 7, Paragraph no. 1, Sentence line no. 12:\n- Stated as: post-operative pain, flare-up incidence, clinical and radiographic healing. - Suggested correction: post-operative pain, flare-up incidence, and clinical and radiographic healing.\n\nPage no. 7, Paragraph no. 6, Sentence line no. 
4:\n- Stated as: assessment15,16, which, if they were available, might have… - Suggested correction: assessment15,16, which if were available, might have…\n\nPage no. 7, Paragraph no. 7, Sentence line no. 4-6:\n- Stated as: The number of randomized clinical studies assessing triple antibiotic paste effectiveness as an intra-canal medication is few. - Suggested correction: The number of randomized clinical studies assessing triple antibiotic paste effectiveness as an intra-canal medication is less.\n\nNote:\nSince the answer is \"Partly\" to the 2nd evaluation criteria in the peer review form i.e. “Are sufficient details of the methods and analysis provided to allow replication by others?”, the following suggestions are recommended to the authors:\nNeed for data extraction and risk of bias assessment in duplicate.\n\nProvide quotes or relevant statements from the article to justify the risk judgement in the risk of bias table.\n\nState clearly measures adopted to address any disagreement among the authors during data extraction and risk of bias assessment.\n\nUse randomized clinical trial (RCT) filter in the search strategy.\n\nFurther search for articles in Asian and Latin American databases.\n\nState the search dates for each database.\n\nSpecify the grade of the quality of evidence.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Yes\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Partly\n\nIs the statistical analysis and its interpretation appropriate? Yes\n\nAre the conclusions drawn adequately supported by the results presented in the review? Yes",
"responses": []
},
{
"id": "90674",
"date": "29 Jul 2021",
"name": "Vivek Aggarwal",
"expertise": [
"There are limited studies on this topic. The authors have included all the studies and presented a narrative review."
],
"suggestion": "Approved",
"report": "Approved\n\nThe authors have presented a systematic review evaluating the efficacy of triple antibiotic paste (vs other medications) in patients with apical periodontitis. The study has been initially published on 11-Oct-2018. This is an invited review (invitation by the Editorial Team, F1000Research, and dated 28-Jul 2021) of this paper. Triple antibiotic pastes are commonly used in regenerative endodontics. However, potential discoloration has led to the use of double antibiotic paste. Since the review is performed 3 years after the publication of the study, I shall limit my comments to the methodology and the reliability of the data.\nThe systematic review (SR) and meta-analysis are considered the highest evidence for treatment planning. The SRs require a ‘priori’ design/protocol to be prospectively registered at databases such as PROSPERO. This manuscript has been registered at PROSPERO on 28-Aug-2018 and subsequently revised on 31-Oct-2018. The protocol matches with the reported data and there has been no selective reporting.\nIntroduction: A rationale for the study is required. The introduction should be expanded to include a suitable background for this study.\nFor an accurate SR analysis, a good PICO question should be developed. The current review states ‘primary outcomes were post-operative pain and flare-up incidence after the first visit as defined by the trial authors. 
The secondary outcome was clinical and radiographic healing as defined by the trial authors with at least one-year follow-up’. Out of three included studies, Johns et al. did not evaluate the post-operative pain (i.e. the primary outcome), and rest other two studies did not evaluate the secondary outcome.\nConsidering the limitations of the data, the authors have accurately decided to perform a narrative review. It would be better to report the 95% confidence intervals for all the data. Raw data, if required, can be requested from the individual authors.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Partly\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Yes\n\nIs the statistical analysis and its interpretation appropriate? Not applicable\n\nAre the conclusions drawn adequately supported by the results presented in the review? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1627
|
https://f1000research.com/articles/7-1625/v1
|
10 Oct 18
|
{
"type": "Research Article",
"title": "Risk factors for diphtheria outbreak in children aged 1-10 years in East Kalimantan Province, Indonesia",
"authors": [
"Iwan Muhamad Ramdan",
"Rahmi Susanti",
"Riza Hayati Ifroh",
"Reny Noviasty"
],
"abstract": "Background: Diphtheria remains a health problem, especially in developing countries. In November 2017, the Indonesian Ministry of Health stated that there was a diphtheria outbreak in Indonesia. East Kalimantan is one of the provinces that experienced this disease outbreak. This study analyzes the risk factors for the diphtheria outbreak in children aged 1-10 years. Methods: A case-control study was conducted on 37 respondents. The research variables consisted of immunization status against diphtheria, pertussis and tetanus (DPT), nutritional status, child mobility, source of transmission, physical home environment (natural lighting, ventilation area, occupancy density, wall and floor type), knowledge of diphtheria and attitudes towards the diphtheria prevention program. Results: We found that most of the children who had diphtheria had been immunized against DPT. Additionally, the nutritional status of children (p=0.049), mobility (p<0.001) and the source of transmission (p=0.020) were significantly associated with diphtheria. Conclusions: Child/parent mobility (OR=8.456) is the main risk factor for the diphtheria outbreak. It is recommended to limit children's travel to areas that are experiencing increased cases of diphtheria, to improve nutritional status, and to conduct further research on the effectiveness of the diphtheria vaccine.",
"keywords": [
"Pediatrics Diphtheria",
"immunization status",
"nutrition status",
"mobility",
"source of transmission",
"knowledge and attitude",
"physical home environment"
],
"content": "Introduction\n\nAlthough vaccination programs have succeeded in reducing the incidence of diphtheria worldwide, diphtheria remains a health problem, especially in the Asian region. The World Health Organization reported 4,680 cases of diphtheria in 2013, widespread but mostly concentrated on the Asian continent, including India (3,313 cases), Indonesia (775 cases), Iran (190 cases), Pakistan (183 cases), and Nepal (103 cases). Indonesia had the second highest number of diphtheria cases, with 775 cases1,2.\n\nIn November 2017, the Indonesian Ministry of Health stated that there was a diphtheria outbreak in Indonesia. This was based on reports from various provincial health offices, with 593 cases documented between 1 January and 1 November 2017. This represented a surge in the number of cases: previously, there were 415 cases in 2016, 502 cases in 2015 and 502 cases in 2014. East Kalimantan is one of the provinces that experienced a diphtheria outbreak, with all cases occurring in children aged 1–10 years3.\n\nDiphtheria, whose name is taken from the Greek \"diphthera\", meaning leather hide, was first identified by Hippocrates in the 5th century BC4. This disease mostly occurs in children under 5 years of age, but currently also occurs in children over 5 years (5–19 years) and in adults5. Several studies have shown that low vaccination coverage, crowding and migration, or a combination of host, agent, and environmental factors, can influence the incidence of diphtheria6,7. 
Other factors include nutritional status and parental behavior, children's personal hygiene8, occupancy density of the house, humidity in the house, the type of house floor and the source of transmission (contracting the disease from other people), parents' knowledge about diphtheria9, parents' education level10,11, child age, home lighting, and house ventilation12.\n\nThis study aims to determine the risk factors for diphtheria outbreaks in children aged 1–10 years in the East Kalimantan province of Indonesia, considering immunization factors, child factors, home environmental factors, and parents' knowledge and attitudes.\n\n\nMethods\n\nA case-control study was conducted on 37 respondents (18 cases: children with diphtheria; 19 controls: healthy children) between April and August 2018, in six districts in the province of East Kalimantan (the cities of Samarinda, Bontang and Balikpapan, and the districts of Kutai Kartanegara, Kutai Timur and Berau). The population approached for recruitment was all children aged 1–10 years with diphtheria recorded by the East Kalimantan provincial health office from January 1, 2017 to March 1, 2018. The study began after the researchers obtained permission and the addresses of the children suffering from diphtheria from the relevant authorities. 
Data collection was conducted by visiting the home of each child suffering from diphtheria (case) and of neighbors living close to the case group, and obtaining written informed consent from a parent/guardian.\n\nThe case group was formed of children suffering from diphtheria, with the following inclusion criteria: age 1–10 years; recorded in the East Kalimantan Provincial Health Office register from January 2017–February 2018; residing in the City of Balikpapan, City of Samarinda, City of Bontang, District of Kutai Timur, District of Kutai Kartanegara, or District of Berau; had not moved to another area; the house they occupied had not been renovated from one week before the child contracted diphtheria until data collection; and the patient's family was willing to become respondents and to be interviewed.\n\nThe control group was formed of children who did not have diphtheria, with the following inclusion criteria: aged 1–10 years; residing in the City of Balikpapan, City of Samarinda, City of Bontang, District of Kutai Timur, District of Kutai Kartanegara, or District of Berau; being a neighbor of a child with diphtheria/living in the same area as the case group; had not moved to another area; the house they occupied had not been renovated from one week before the neighboring child contracted diphtheria until the time of data collection; and the child's family was willing to become respondents and to be interviewed.\n\nAll children with diphtheria were used as respondents (total sampling), while the control group was obtained using non-random sampling techniques. 
The control group was recruited by identifying children who met the inclusion criteria and were friends with those in the case group or lived nearby.\n\nThe dependent variable in this study was diphtheria, while the independent variables consisted of age, gender, DPT immunization status, nutritional status, child mobility (a history of travel to an area experiencing an increase in cases of diphtheria), source of transmission (friends at school or neighbors who had diphtheria), the house's physical environment (natural lighting, house ventilation, occupancy density, type of wall and floor), knowledge of diphtheria and attitude towards the diphtheria prevention program.\n\nAn interviewer-administered structured questionnaire and an observation checklist were used to collect data. The questionnaire and observation checklist used in this study consist of eight sections. Section A: Socio-demographic information (initials, place and date of birth, address); Section B: Immunization status (data obtained by interview and confirmed against the immunization card for each child); Section C: Nutritional status (height and weight of the children, followed by calculation of body mass index); Section D: Physical home environment (natural lighting in the house and bedroom, the width of the house ventilation, the floor area of the house, the number of people sleeping in a room with the child suffering from diphtheria, the type of house wall, the type of house floor); Section E: Source of transmission (history of direct contact with a friend suffering from diphtheria in the home environment or at school); Section F: Mobility (history of the child traveling/staying outside the city of domicile one week before illness); Section G: Knowledge of diphtheria (causes, signs and symptoms, modes of transmission, benefits of DPT immunization, other prevention methods); Section H: Attitude towards the diphtheria prevention program (favorable or unfavorable). 
Dataset 1 contains all de-identified responses to the questionnaire13.\n\nTo reduce interview bias, the researchers provided adequate explanations before each interview began, motivated respondents to give honest answers, used questionnaires arranged in simple and easily understood language, and provided sufficient time for the interviews. The determination of DPT immunization status, nutritional status and healthy housing standards was in line with the Indonesian Health Ministry regulations14–16.\n\nData were analyzed using chi-square tests and multiple logistic regression. To assess the risk factors related to diphtheria, odds ratios (OR) with 95% confidence intervals were calculated. Data analysis was performed using the Statistical Package for the Social Sciences (SPSS ver. 21, Chicago, IL, USA).\n\nThe study was reviewed and approved by the Ethical Commission of Health and Medical Research, Faculty of Medicine, Mulawarman University, Indonesia (approval number: 42/KEPK-FK/V/2018), in accordance with the International Ethical Guidelines for Biomedical Research Involving Human Subjects and the International Ethical Guidelines for Epidemiological Studies of the Council for International Organizations of Medical Sciences (CIOMS 2016). Written informed consent was obtained from a parent or guardian of each participant prior to participation. 
The informed consent stated the purpose of the study, data confidentiality, and the voluntary nature of participation in the study, and guaranteed that no participant would suffer any harm as a result of his/her participation.\n\n\nResults\n\nIn the case group, most children were male (66.6%) and aged > 5–10 years (66.6%); DPT immunization status was mostly complete (83.3%); nutritional status was mostly bad (72.2%); child mobility was mostly “yes” (61.1%); source of transmission was mostly “no” (77.7%); knowledge of diphtheria was balanced between good and bad (50%); attitude towards the diphtheria prevention program was mostly favorable (55.5%); home ventilation width was mostly bad (77.7%); home occupancy density was mostly good (72.2%); home walls were mostly made from concrete brick without plastering (61.1%); and home floors were mostly ceramic (66.6%).\n\nIn the control group, most children were male (52.6%) and aged 1–5 years (52.6%); DPT immunization status was mostly complete (63.1%); nutritional status was mostly good (63.1%); child mobility was mostly “yes” (84.2%); source of transmission was mostly “yes” (63.1%); knowledge of diphtheria was mostly good (52.6%); attitude towards the diphtheria prevention program was mostly favorable (52.6%); home ventilation width was mostly bad (68.4%); home occupancy density was mostly good (63.1%); home walls were mostly made from concrete brick without plastering (57.8%); and home floors were mostly ceramic (63.1%) (Table 1 and Table 2).\n\nThe results of the bivariate test showed that nutritional status (p=0.049) (OR=4.457), mobility (p<0.001) (OR=6.812) and source of transmission (p=0.020) (OR=0.16) were significantly associated with the incidence of diphtheria in East Kalimantan Province, Indonesia (Table 2).\n\nMultivariate analysis was performed on the variables that proved to be significantly associated with the incidence of 
diphtheria, i.e. nutritional status, mobility and source of transmission. The results show that mobility (OR=8.456) is the main risk factor for diphtheria in East Kalimantan Province (Table 3).\n\nOR, odds ratio; CI, confidence interval.\n\n\nDiscussion\n\nThe results of the univariate analysis demonstrated that most patients with diphtheria had received complete DPT immunization. The bivariate analysis revealed no correlation between DPT immunization status and diphtheria infection. This result is notable, and indicates that further investigation is required into the effectiveness and potency of the vaccines. A further example, documented by Ningtyas et al.17 concerning cases of measles in children in Indonesia, also concluded that the incidence of measles in children remained high in areas with high measles immunization coverage; this was related to vaccine quality and effectiveness, owing to health workers' skill in administering vaccines and the availability of vaccine facilities. Other studies have documented that the thermolability of vaccines, combined with breaks in the cold chain, can lead to loss of vaccine potency18. The results of this study complement the findings of Dhinata et al.19, which found no correlation between patient immunization status and the severity or fatality of diphtheria in the Sampang District of Indonesia.\n\nComplete immunization status does not guarantee that a child is free from the risk of diphtheria. Sadoh and Sadoh20 concluded that two out of three children with diphtheria in Nigeria had been completely immunized against DPT, and suggested the use of DT boosters in developing countries. Previously, Gowin et al.21 showed that even though tetanus and diphtheria antibody concentrations are quite high in children who have been immunized, the percentage of children protected against diphtheria is smaller than that against tetanus. 
Likewise, the results of research by Phadke et al.22 revealed that several pertussis outbreaks in the United States also occurred in highly vaccinated populations, indicating waning immunity.\n\nWe found that the nutritional status of children was significantly associated with the incidence of diphtheria. This result is consistent with other studies, which have concluded that nutritional status is associated with increased risk and/or severity of infectious disease23, that children's nutritional status is significantly associated with diphtheria in Situbondo, Indonesia24, and that poor nutritional status and immune deficiencies reduce the body's response to vaccines25,26. The implication of this finding is that, to reduce the risk of diphtheria in children, improvement of nutrition is absolutely necessary.\n\nThe results show that the mobility of respondents (a history of travel to an area experiencing a surge in cases of diphtheria) is significantly related to the incidence of diphtheria. This result is consistent with the study by Patil et al.27, which concluded that mobility creates vulnerability to pediatric diphtheria outbreaks in a district of central India. Population migration increases the risk of transmission of infectious diseases28, and transmission of measles, rubella, diphtheria, tetanus, polio and Haemophilus influenzae is strongly influenced by population mobility29. High mobility, poor living conditions, and barriers to accessing healthcare are risk factors that facilitate the spread of infectious diseases such as tuberculosis (active and latent), HIV, hepatitis B, hepatitis C, measles, mumps, rubella, diphtheria, tetanus, pertussis, H. influenzae type b, strongyloidiasis and schistosomiasis30. 
Based on this conclusion, the prohibition or limitation of children/parents visiting areas that are experiencing diphtheria outbreaks should be recommended so that the risk of transmission is reduced.\n\n\nConclusion\n\nNutritional status, child mobility and source of transmission were significantly associated with diphtheria. Most children who had diphtheria (83.3%) had received complete DPT immunization. The mobility of children is the main risk factor for diphtheria. It is recommended that children/parents avoid visiting areas where a diphtheria outbreak is occurring, and that children's nutritional status be improved. Further research is needed on the effectiveness of the diphtheria vaccine in East Kalimantan Province, Indonesia.\n\n\nData availability\n\nDataset 1. All raw data and demographic information obtained from subjects during the present study. DOI: https://doi.org/10.5256/f1000research.16433.d220825 (reference 13).",
"appendix": "Grant information\n\nThis work was supported by the Islamic Development Bank (IDB), Development of Four Higher Education Institution, Project Implementation Unit of Mulawarman University of Indonesia.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors are grateful to all of the respondents in this research, the Rector of Mulawarman University and the Islamic Development Bank.\n\n\nReferences\n\nWHO: Diphtheria Reported Cases. 2014.\n\nInfoDATIN: Situasi Imunisasi di Indonesia (Immunization situation in Indonesia). 2016.\n\nMinister of Health, Indonesia: Imunisasi Efektif Cegah Difteri (Effective Immunization can Prevent Diphtheria).\n\nCDC: Corynebacterium diphtheriae. In: Epidemiology & Prevention of Vaccine-Preventable Diseases. United States: Centers for Disease Control and Prevention; 2015; 107–118.\n\nMurhekar MV, Bitragunta S: Persistence of diphtheria in India. Indian J Community Med. 2011; 36(2): 164–5.\n\nBisgard KM, Rhodes P, Hardy IR, et al.: Diphtheria toxoid vaccine effectiveness: A case-control study in Russia. J Infect Dis. 2000; 181 Suppl 1: S184–S187.\n\nNanthavong N, Black AP, Nouanthong P, et al.: Diphtheria in Lao PDR: Insufficient Coverage or Ineffective Vaccine? PLoS One. 2015; 10(4): e0121749.\n\nMukarami H, Phuong NM, Thang HV, et al.: Endemic diphtheria in Ho Chi Minh City, Viet Nam: a matched case-control study to identify risk factors of incidence. Vaccine. 2010; 28(51): 8141–8146. 
Kartono B, Purwana R, Djaja I: Hubungan antara lingkungan rumah dengan kejadian Difteri di Kabupaten Garut dan Tasikmalaya, Indonesia (Correlation between home environment and diphtheria outbreak in the Tasikmalaya and Garut Districts of Indonesia). Makara, Kesehatan Indones. 2008; 12(1): 8–12.\n\nArifin I, Prasasti C: Faktor yang berhubungan dengan kasus difteri anak di Puskesmas Bangkalan, Indonesia (Related factors to diphtheria cases in children in the Bangkalan health center, Indonesia). J Berk Epidemiol. 2016; 5(1): 26–36.\n\nGarib Z, Danovaro-Holliday MC, Tavarez Y, et al.: Diphtheria in the Dominican Republic: reduction of cases following a large outbreak. Rev Panam Salud Publica. 2015; 38(4): 292–299.\n\nSaifudin N, Wahyuni C, Martini S: Faktor risiko kejadian difteri di Kabupaten Blitar, Indonesia (Risk factors of diphtheria incidence in Blitar, Indonesia). J Wiyata Indones. 2016; 3(2): 61–66.\n\nMuhamad Ramdan I, Susanti R, Ifroh RH, et al.: Dataset 1 in: Risk Factors of Diphtheria Outbreak in Children aged 1-10 years in East Kalimantan Province of Indonesia. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16433.d220825\n\nMenteri Kesehatan RI: Peraturan Menteri Kesehatan Republik Indonesia Nomor 12 Tahun 2017 tentang Penyelenggaraan Imunisasi (Regulation of the Minister of Health of the Republic of Indonesia Number 12 of 2017 on the Implementation of Immunization). Kementeri Kesehat. 2017; 1–162.\n\nKemenkes: Standar Antropometri Penilaian Status Gizi Anak (Anthropometric standards for assessing children's nutritional status). 2010; 40.\n\nKesehatan M, Indonesia R: Keputusan Menteri Kesehatan No. 829 Tahun 1999 Tentang: Persyaratan Kesehatan Perumahan (Decree of the Minister of Health No. 829 of 1999 on Housing Health Requirements). 1999; (829).\n\nNingtyas DW, Wibowo A: Pengaruh Kualitas Vaksin Campak Terhadap Kejadian Campak Di Kabupaten Pasuruan (The effect of measles vaccine quality on the incidence of measles in Pasuruan District). J Berk Epidemiol. 2015; 3(3): 315–326.\n\nKristensen D, Chen D, Cummings R: Vaccine stabilization: research, commercialization, and potential impact. Vaccine. 2011; 29(41): 7122–7124. 
Dhinata KS, Atika A, Husada D, et al.: Correlation between immunization status and pediatric diphtheria patients outcomes in the Sampang District, 2011-2015. Paediatr Indones. 2018; 58(3): 110–115.\n\nSadoh AE, Sadoh WE: Diphtheria mortality in Nigeria: the need to stock diphtheria antitoxin. African J Clin Exp Microbiol. 2011; 12(2): 82–85.\n\nGowin E, Wysocki J, Kałużna E, et al.: Does vaccination ensure protection? Assessing diphtheria and tetanus antibody levels in a population of healthy children: A cross-sectional study. Medicine (Baltimore). 2016; 95(49): e5571.\n\nPhadke VK, Bednarczyk RA, Salmon DA, et al.: Association Between Vaccine Refusal and Vaccine-Preventable Diseases in the United States: A Review of Measles and Pertussis. JAMA. 2016; 315(11): 1149–1158.\n\nPrendergast AJ: Malnutrition and vaccination in developing countries. Philos Trans R Soc Lond B Biol Sci. 2015; 370(1671): pii: 20140141.\n\nSundoko TW, Rasni H, Hardiani RS: Hubungan antara peran orang tua dan faktor risiko kejadian Difteri di Kabupaten Situbondo, Indonesia (Correlation between the role of parents and risk of diphtheria in Situbondo Regency, Indonesia). J Heal Libr Indones. 2015; 3(1): 96–102.\n\nLalor MK, Floyd S, Gorak-Stolinska P, et al.: BCG vaccination: a role for vitamin D? PLoS One. 2011; 6(1): e16709.\n\nKaufman DR, De Calisto J, Simmons NL, et al.: Vitamin A deficiency impairs vaccine-elicited gastrointestinal immunity. J Immunol. 2011; 187(4): 1877–1883.\n\nPatil N, Gawade N, Gaidhane A, et al.: Investigating diphtheria outbreak: A qualitative study in rural area. 
Int J Med Sci Public Heal. 2014; 3(4): 513–516.\n\nGushulak BD, MacPherson DW: Globalization of infectious diseases: the impact of migration. Clin Infect Dis. 2004; 38(12): 1742–1748.\n\nCastelli F, Sulis G: Migration and infectious diseases. Clin Microbiol Infect. 2017; 23(5): 283–289.\n\nPottie K, Mayhew AD, Morton RL, et al.: Prevention and assessment of infectious diseases among children and adult migrants arriving to the European Union/European Economic Association: a protocol for a suite of systematic reviews for public health and health systems. BMJ Open. 2017; 7(9): e014608."
}
|
[
{
"id": "39315",
"date": "19 Oct 2018",
"name": "Soedjajadi Keman",
"expertise": [
"Public Health",
"especially Environmental and Occupational Health"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis research article is very good. However, please complete the conclusion in the Abstract: child/parent mobility (OR=8.456) is the main risk factor for diphtheria outbreak, along with the nutritional status of children and the source of transmission.\nThe keywords, introduction, methods, results (including statistical analysis), discussion, conclusion and references are okay.\nAdditional comments:\n\n1. The study design is appropriate and the work is technically sound.\n\n2. The method is quite detailed and the analysis provided is fully sufficient to allow replication by other researchers.\n\n3. The statistical analysis is correct and its interpretation is also appropriate.\n\n4. The conclusions drawn should be: the main risk factors for diphtheria outbreak are children's mobility, source of transmission, and nutritional status. It is recommended that parents limit the mobility of their children to areas that are experiencing increased cases of diphtheria and improve their children's nutritional status as well. It is recommended that further study analyze the effectiveness of diphtheria vaccines, since both the study and control groups had already received diphtheria vaccination.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "42801",
"date": "06 Nov 2019",
"name": "Yves Buisson",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is an interesting, well-structured study analyzing the risk factors for contracting diphtheria in Indonesian children. The article is well written but some improvements are needed before indexing:\nWhat were the criteria for defining cases of diphtheria? Only clinical, or after bacteriological confirmation?\n\nSpecify the age group >5-10 years: is it 5-10, or >10, or both (>5)?\n\nIn \"Methods - statistical analysis\", it must be stipulated that only the parameters giving a p <0.05 are entered in the logistic regression.\n\nThe results obtained (cases better vaccinated than the controls and better nutritional status in the controls than in the cases) should lead to a deeper discussion of the possibility of a weaker immune response among the cases, and suggest a complementary study with measurement of post-vaccine antibodies.\n\nA source of contamination was found among the controls, not among the cases; this contradicts the fact that, among diphtheria cases, mobility in a region experiencing a recrudescence of diphtheria proves to be the main risk factor. Such a discrepancy should be analysed in the discussion.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1625
|
https://f1000research.com/articles/7-1098/v1
|
17 Jul 18
|
{
"type": "Method Article",
"title": "Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer",
"authors": [
"Gerardo Chacón",
"Johel E. Rodríguez",
"Valmore Bermúdez",
"Miguel Vera",
"Juan Diego Hernández",
"Sandra Vargas",
"Aldo Pardo",
"Carlos Lameda",
"Delia Madriz",
"Antonio J. Bravo",
"Johel E. Rodríguez",
"Valmore Bermúdez",
"Miguel Vera",
"Juan Diego Hernández",
"Sandra Vargas",
"Aldo Pardo",
"Carlos Lameda",
"Delia Madriz",
"Antonio J. Bravo"
],
"abstract": "Background: Multi–slice computerized tomography (MSCT) is a medical imaging modality that has been used to determine the size and location of stomach cancer. Additionally, MSCT is considered the best modality for the staging of gastric cancer. One way to assess type 2 stomach cancer is by detecting the pathological structure with an image segmentation approach. The tumor segmentation of MSCT gastric cancer images enables the diagnosis of the disease condition, for a given patient, without using an invasive method such as surgical intervention. Methods: This approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Then, a segmentation stage using a clustering method allows the adenocarcinoma morphology to be obtained. In the third stage, the pathology region is reconstructed and then visualized with a three–dimensional (3–D) computer graphics procedure based on the marching cubes algorithm. In order to validate the segmentations, the Dice score is used as a metric for comparing the segmentations obtained using the proposed method with respect to ground truth volumes traced by a clinician. Results: A total of 8 datasets for diagnosed patients, from the cancer data collection of The Cancer Genome Atlas Stomach Adenocarcinoma (TCGA–STAD) project, is considered in this research. The volume of the type 2 stomach tumor is estimated from the 3–D shape computationally segmented from each dataset. These 3–D shapes are computationally reconstructed and then used to assess the macroscopic morphopathological features of this cancer. Conclusions: The segmentations obtained are useful for assessing type 2 stomach cancer qualitatively and quantitatively. In addition, this type of segmentation allows the development of computational models that allow the planning of virtual surgical processes related to type 2 cancer.",
"keywords": [
"Stomach tumor",
"type 2 cancer",
"medical imaging",
"multi–slice computerized tomography",
"image enhancement",
"region growing method",
"marching cubes",
"three-dimensional reconstruction"
],
"content": "Introduction\n\nComputed tomography (CT) is a 3-D medical imaging tool, with extensive beneficial impact on the diagnosis, characterization and explanation of complex health issues1. The latest development in spiral CT with electrocardiogram gating technology is multislice CT (MSCT), which has allowed for the acquisition of large volumes of data and of dynamic volume imaging2,3.\n\nMSCT of the abdomen has been used to determine the size and location of stomach cancers4. This imaging modality has allowed for the staging of stomach cancer tumors. In general, MSCT is considered the best modality for the staging of gastric cancer, and it allows for the assessment of local tumor extension, nodal disease and metastases through a non-invasive clinical procedure5.\n\nThe ability of multi-detector systems to acquire wider areas of the abdomen than single-detector systems prevents the generation of image motion artifacts that are mainly due to the long breath-holds otherwise required. In this sense, total abdomen acquisition requires shorter acquisition times, which decreases the amount of contrast product that needs to be used and increases the resolution of the 3-D images.\n\nThe manual assessment of the volume of adenocarcinoma of the stomach by CT requires the setting of the window width and window level in Hounsfield units, values that are generated by the subjective estimation of the clinician. In addition, the tumor must be delineated manually in each slice of the 3-D MSCT image. Following this, the area of the tumor is calculated according to the delineated region on each axial image plane. Finally, each area of the contiguous transverse tumor slices is summed to compute the whole-tumor volume.\n\nTo analyze the features and/or spatial distribution of functional regions of anatomic tissues or organs from medical images, segmentation, as an image processing technique, has been extensively used6. 
Segmentation has also been used as a preprocessing technique to extract the information required for processing techniques, such as diagnosis or quantification7, visualization8, and compression, storage and transmission9,10. The objective of segmentation is to organize and group the set of shapes contained in the images using the proximity, similarity and continuity of the shapes as the organization and grouping criteria11,12.\n\nThe two basic kinds of medical image segmentation techniques are based on the delineation of a curve that defines the anatomical structures13, and the application of pattern classification methods14. Both kinds allow for representation of the image as a non-overlapped set of two regions (the subject of interest and the background).\n\nAccording to the Borrmann classification15, type 2 cancer is an ulcerated but circumscribed advanced cancer. This cancer type is ulcerated, with partial marginal elevation and partial diffuse dissemination; it is frequently located in the antrum and lesser curvature. Figure 1 shows the shape of type 2 cancer.\n\nThe Japanese Gastric Cancer Association16,17 typified type 2 cancer as an advanced cancer that deepens and invades the muscular or subserosa layer. Type 2 or superficial cancer is classified as elevated or slightly elevated (less than 5 mm), flattened or flat, and depressed (Figure 2).\n\nThis study presents the outcome of the development of a computational approach for assessing type 2 stomach cancer from MSCT abdominal images. The approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Next, a segmentation stage using a clustering method allows the adenocarcinoma morphology to be obtained. 
In the third stage, the pathology region is reconstructed and then visualized with a 3-D computer graphics procedure based on the marching cubes algorithm.\n\nThe computational approach proposed herein adds to the academic contributions to the medical field, specifically the diagnosis and/or treatment of pathologies that require further scientific advances to increase the rate of curability. The social impact of this research is notable, because MSCT is a diagnostic technique with a lower cost than more invasive methods used to assess gastric cancer. This converges with the recommendations of the Pan American Health Organization18, the World Health Organization19 and the International Agency for Research on Cancer20 for the prevention and control of this disease in order to facilitate the application of treatment methods based on scientific data.\n\nThe main objective of this research is to propose a computational approach to automatically detect the morphopathological shape of adenocarcinoma of the stomach. The proposal is based on a sequential design which involves image enhancement, segmentation, and three-dimensional image visualization. After the application of the proposed approach, a detailed analysis, both quantitative and qualitative, is presented, by providing a measure of performance and the assessment of several qualitative type 2 cancer features.\n\n\nMethods\n\nThe minimum system requirements needed to run the MatLab scripts contained within this study are: 2.2 GB of HDD space for MATLAB only, 4–6 GB for a typical installation, any Intel or AMD x86 processor, and 4 GB of RAM; no specific graphics card is required.\n\nThe dataset considered in this research was obtained from The Cancer Genome Atlas–Stomach Adenocarcinoma (TCGA–STAD)21. The dataset connects cancer phenotypes to genotypes using medical images matched with subjects from TCGA22,23. 
Table 1 shows the descriptive phenotypes and histological parameters of eight patients from TCGA–STAD with type 2 cancer. The TCGA-STAD dataset is composed of a single series, namely TCGA-STAD-VQ. The eight datasets used correspond to the patients of the series TCGA-STAD-VQ with type 2 cancer.\n\nT, tumor stage; N, lymph node stage; M, metastasis stage.\n\nThe software used in this phase was developed within the framework of the present investigation and corresponds to a MatLab script (MatLab R2012a). This script, the Enhancement Software, is available on Zenodo24.\n\nAs the gastric mucosa has many folds and is formed by connective tissue that joins the muscle and the mucosa, the abdominal tomography images produced at the interface between the mucosa and the contrast agent are, in certain regions, not homogeneous, and consequently the tumor is shown with unclearly differentiated edges.\n\nAn enhancement approach is required to improve the adenocarcinoma information with respect to the non-homogeneous background. To correct the non-homogeneities and to enhance the adenocarcinoma, the look-up table (LUT)-based method25 is used. This method has been used to improve confocal microscopy26 and X-ray rotational angiography27 images. The LUT is constructed according to the following procedure:\n\n1. Choose n non-homogeneous images.\n\n2. For each image j (1 ≤ j ≤ n), determine from its histogram the frequency percentage vector f_j = [f_j(0) f_j(1024) f_j(2048) f_j(3072) f_j(4095)]. These frequencies are associated with the gray-level vector Level = [0 1024 2048 3072 4095].\n\n3. Obtain the aggregated frequency percentage vector F, where F(Level_k) = Σ_{j=1}^{n} f_j(Level_k) for 0 ≤ k ≤ 4.\n\n4. Construct a LUT as a transfer function defined by the concatenation of four linear transformations. 
Each linear function is constructed using InputLUT_j = Level_j as input and OutputLUT_j = Σ_{k=0}^{j} F(Level_k) as output, for 0 ≤ j ≤ 4.\n\nFigure 3 shows both LUTs: the original (black) and the one constructed using the previous procedure (gray).\n\nOnce corrected, the image is smoothed using a Gaussian filter with a spread factor σ. This parameter is set to the standard deviation value of the corrected image. The relationship between the corrected and smoothed images is obtained using a simple linear regression model28.\n\nSegmentation was performed using VolView 3.4 for Linux (64-bit).\n\nThe segmentation is based on a simple-linkage region growing technique. The following procedure is used:\n\n1. A voxel is tagged as a seed voxel of a new cluster when its intensity value is lower than the standard deviation of the enhanced image, and it is still unlabeled (the voxel is not associated with any cluster). The procedure ends when all voxels are labeled.\n\n2. All neighbor voxels of the new cluster are eligible for merging. For each neighbor voxel, every voxel in an 8-neighborhood is considered. The neighbor voxel is joined to the current cluster if i) the neighbor voxel is still unlabeled; and ii) the intervoxel distance between the edge voxel and the neighbor voxel is below the standard deviation of the enhanced image. When the current cluster stops growing, go back to point 1.\n\nThis procedure is applied to the enhanced image to obtain two regions, the tumor and the background.\n\nAfter the segmentation process, the reconstruction of the pathological surface is performed using the Visualization Toolkit (VTK-6.3.0)29. VTK is an open-source library for image processing and 3-D scientific visualization used by many researchers and developers worldwide. The reconstruction algorithm is designed according to an object-oriented computational model, which is developed with the C++ class library contained in VTK30. 
This reconstruction procedure only requires the segmented volume as an input parameter.\n\nThe tumor wall is reconstructed using the marching cubes algorithm31. Marching cubes has long been used as a standard indirect volume-rendering approach to extract iso-surfaces from 3-D volumetric data. The algorithm was developed by Lorensen and Cline32 and has this name because it takes eight neighboring locations at a time to construct an imaginary cube, generating the necessary polygons to reconstruct the surface.\n\nThe validation of the proposed segmentation technique is performed by quantifying the difference between the estimated pathological shape and a ground truth shape traced by an expert (V.B.). The difference is estimated using the Dice coefficient, which quantifies the degree of overlap between two volumes33. Another MatLab script (in MatLab version R2012a) was also written within the framework of the present investigation to validate the proposed segmentation approach. Version 1.0 of this script is available on Zenodo34; its input parameters correspond to the estimated type 2 cancer shape and the ground truth shape.\n\n\nResults\n\nThe results obtained from this research are in part based upon data generated by the TCGA Research Network: http://cancergenome.nih.gov/.\n\nThe MSCT 3-D image of the TCGA–VQ–A8DL dataset is used to illustrate the proposed three-phase approach. The non-homogeneous background correction procedure described in Phase 1: Image enhancement is applied to the dataset (Figure 4). The background appears to be more homogeneous, whereas the associated type 2 cancer tumor information is enhanced.\n\nFirst column, original images; second column, corrected images.\n\nThe parameter n required for the image enhancement procedure (Phase 1: Image enhancement), which corresponds to the number of non-homogeneous images, is chosen as 20% of the slices of each tomographic volume to be analyzed. 
The TCGA–VQ–A8DL dataset contains an MSCT volume of 512×512×101; since the volume has 101 slices, n corresponds to 20 of those slices.\n\nThe seed voxel is used to start the region-growing segmentation process. This seed is established in the two-dimensional image (MSCT slice). A manual process performed by a clinician is applied to locate the volume slice where the adenocarcinoma area is visually maximized.\n\nFigure 5 shows the results of this method applied to the dataset TCGA–VQ–A8DL. Each row shows four MSCT slices (the axial anatomical view) from the image volume. The adenocarcinoma contour is indicated by a black dash-dotted line. Regions extracted using the proposed segmentation approach are indicated by white areas.\n\nFigure 6 shows the 3-D reconstruction obtained using the procedure based on the marching cubes algorithm. The input of the reconstruction algorithm is the adenocarcinoma shape, segmented using the procedure based on a region-growing technique. Table 2 shows the volumes quantified from the reconstructions.\n\nThe comparison between the segmented structures and the cancer shapes delineated by the clinician is performed based on the Dice coefficient. The Dice score obtained (mean ± standard deviation) for the eight datasets of the 3-D images is 91.54% ± 5.26%, with a maximum value of 97.31% and a minimum value of 83.47%.\n\n\nDiscussion\n\nThe method proposed in this study can be used to generate 3-D segmentations of stomach adenocarcinoma. These segmentations are useful for monitoring the application of cancer treatment methods based on scientific data, since they allow for the calculation of certain quantitative clinical descriptors, such as volume, and for the assessment of several qualitative type 2 cancer features.\n\nThe method is tested on eight gastric cancer datasets. The Dice scores for all MSCT images are reported. 
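The Dice coefficient used for validation above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' MatLab script, and the toy volumes `a` and `b` are hypothetical:

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice overlap between two binary volumes: 2|A∩B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both volumes empty: treat as perfect overlap
    return 2.0 * np.logical_and(seg, truth).sum() / denom

# Toy example: two partially overlapping 4x4x4 blocks in a 10x10x10 volume
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:6, 2:6, 2:6] = True  # 64 voxels
b[3:7, 3:7, 3:7] = True  # 64 voxels; 27 shared with a
print(round(dice_coefficient(a, b), 4))  # 2*27/(64+64) = 0.4219
```

A score of 1 indicates perfect overlap with the ground truth and 0 indicates no overlap; the 91.54% mean reported here corresponds to 0.9154 on this scale.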
The proposed application for detecting type 2 cancer tumors achieves high Dice score values (mean 91.54%).\n\nThe calculated volume is a parameter that is more representative of the real size of the segmented pathological structure, as the descriptor is calculated from a realistic 3-D computational model. This is particularly relevant as tumor volume is an important indicator of lymph node metastasis in advanced gastric cancer. From Figure 6, several macroscopic features associated with type 2 cancer can be validated, such as: circumscribed, with well-defined borders, and ulcerated.\n\nThe segmentations generated by the proposed method can be useful in various scenarios, such as:\n\n1. Academic–didactic: Promoting, deepening and potentiating the study of the real pathology.\n\n2. Research: Design and development of robust, automatic and efficient segmentation methods.\n\n3. Clinical: Supporting the planning of therapeutic and surgical processes associated with stomach cancer.\n\n\nConclusions\n\nA three-phase approach has been developed, based on an image enhancement and region-growing clustering technique, for segmenting stomach tumors associated with type 2 cancer. The segmentations obtained are useful for assessing this pathology. In addition, this type of segmentation allows for the development of computational models that allow the planning of virtual surgical processes related to type 2 cancer.\n\nThe region-growing clustering technique is controlled by a seed point located in a volume slice, which is propagated to the rest of the slices to segment the entire MSCT volume. 
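The seeded region-growing idea summarized above can be sketched on a single 2-D slice. This is an illustrative Python sketch, not the authors' VolView pipeline; the fixed tolerance `tol` stands in for the enhanced image's standard deviation, and the image, seed and tolerance values are hypothetical:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Simple-linkage region growing on a 2-D slice.

    A neighbor joins the region when its intensity differs from the
    current edge pixel by less than tol, over an 8-neighborhood.
    """
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):      # 8-neighborhood around (y, x)
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                    if abs(float(img[ny, nx]) - float(img[y, x])) < tol:
                        grown[ny, nx] = True
                        queue.append((ny, nx))
    return grown

# Toy slice: a bright 4x4 "tumor" block on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 100.0
mask = region_grow(img, seed=(3, 3), tol=50.0)
print(int(mask.sum()))  # 16: only the bright block is grown
```

Propagating the seed through adjacent slices, as described above, extends the same idea to a full 3-D volume.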
The validation of the obtained segmentations shows that the pathological representation obtained using the proposed method exhibits a high correlation with the type 2 cancer shape traced by a clinician.\n\nIn future work, a more complete validation is necessary, including a comparison of the estimated parameters describing the adenocarcinoma volume with respect to results obtained using other measurement techniques.\n\n\nData availability\n\nThe TCGA-STAD type 2 stomach cancer dataset series is available from: https://portal.gdc.cancer.gov/projects/TCGA-STAD, using the parameters TCGA-VQ-XXXX.\n\n\nSoftware availability\n\nEnhancement software available from/archived source code at time of publication: https://doi.org/10.5281/zenodo.1253039 (reference 24).\n\nMatLab script used to compute the Dice coefficient available from/archived source code at time of publication: https://doi.org/10.5281/zenodo.1289908 (reference 34).\n\nLicense: Creative Commons Attribution-ShareAlike 4.0 International.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Universidad Simón Bolívar, Colombia (grant C2011720117).\n\n\nAcknowledgements\n\nThe authors would like to thank the Universidad Simón Bolívar, Colombia, and the Investigation Dean’s Office of Universidad Nacional Experimental del Táchira, Venezuela, for their support of this research.\n\n\nReferences\n\nRubin GD: Computed tomography: Revolutionizing the practice of medicine for 40 years. Radiology. 2014; 273(2 Suppl): S45–S74.\n\nFlohr TG, Schaller S, Stierstorfer K, et al.: Multi-detector row CT systems and image-reconstruction techniques. Radiology. 2005; 235(3): 756–773.\n\nGinat DT, Gupta R: Advances in computed tomography imaging technology. Annu Rev Biomed Eng. 2014; 16(1): 431–453.\n\nPark SR, Lee JS, Kim CG, et al.: Endoscopic ultrasound and computed tomography in restaging and predicting prognosis after neoadjuvant chemotherapy in patients with locally advanced gastric cancer. Cancer. 2008; 112(11): 2368–2376.\n\nHallinan JT, Venkatesh SK: Gastric carcinoma: imaging diagnosis, staging and assessment of treatment response. Cancer Imaging. 2013; 13(2): 212–227.\n\nBankman I: Handbook of Medical Imaging: Processing and analysis. Academic Press, San Diego, 2000.\n\nAngelini ED, Laine AF, Takuma S, et al.: LV volume quantification via spatiotemporal analysis of real-time 3-D echocardiography. IEEE Trans Med Imaging. 2001; 20(6): 457–469.\n\nNelson TR, Elvins TT: Visualization of 3D ultrasound data. IEEE Comput Graph Appl. 1993; 13(6): 50–57.\n\nField MJ: Telemedicine: A Guide to Assessing Telecommunications in Health Care. 
Institute of Medicine, National Academy Press, Washington, 1996. PubMed Abstract | Publisher Full Text\n\nDICOM: Digital imaging and communication in medicine DICOM. NEMA Standards Publication, 1999. Reference Source\n\nFu KS, Mui JK: A survey on image segmentation. Pattern Recognit. 1981; 13(1): 3–16. Publisher Full Text\n\nDuda R, Hart P, Stork D: Pattern Classification. Wiley-Interscience, New York, 2000. Reference Source\n\nKervrann C, Heitz F: Statistical deformable model-based segmentation of image motion. IEEE Trans Image Process. 1999; 8(4): 583–588. PubMed Abstract | Publisher Full Text\n\nMitchell SC, Lelieveldt BP, van der Geest RJ, et al.: Multistage hybrid active appearance model matching: Segmentation of left and right ventricles in cardiac MR images. IEEE Trans Med Imaging. 2001; 20(5): 415–423. PubMed Abstract | Publisher Full Text\n\nBorrmann R: [Geschwulste des margens]. In Henke F, and Lubarsch O, editors, Handbuch spez pathol anat und hist, Springer-Verlag, 1926; 864–871.\n\nJapanese Gastric Cancer Association: Japanese classification of gastric carcinoma: 3rd English edition. Gastric Cancer. 2011; 14(2): 101–112. PubMed Abstract | Publisher Full Text\n\nKajitani T: The general rules for the gastric cancer study in surgery and pathology. Part I. Clinical classification. Jpn J Surg. 1981; 11(2): 127–139. PubMed Abstract | Publisher Full Text\n\nPlan of Action for the Prevention and Control of NCDs in the Americas 2013-2019. Technical Report Washington DC, Pan American Health Organization, 2014. Reference Source\n\nSeventieth World Health Assembly: Technical Report Geneva, World Health Organization, Resolutions and Decisions Annexes, 2017.\n\nSierra MS, Soerjomataram I, Antoni S, et al.: Cancer patterns and trends in Central and South America. Cancer Epidemiol. 2016; 44 Suppl 1: S23–S42. PubMed Abstract | Publisher Full Text\n\nLucchesi FR, Aredes ND: Radiology Data from The Cancer Genome Atlas Stomach Adenocarcinoma [TCGA-STAD] collection, 2016. 
The Cancer Imaging Archive. Publisher Full Text\n\nClark K, Vendt B, Smith K, et al.: The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging. 2013; 26(6): 1045–1057. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJaffe CC: Imaging and genomics: Is there a synergy? Radiology. 2012; 264(2): 329–331. PubMed Abstract | Publisher Full Text\n\nBravo: An image enhancement approach. Zenodo. 2018. Data Source\n\nJähne B: Digital Image Processing-Concepts, Algorithms, and Scientific Applications. Springer, Berlin, 2 edition, 1993. Reference Source\n\nRoa F, Bravo A, Valery A: Automated characterization of bacteria in confocal microscope images. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage AK, 2008; 1–8. Publisher Full Text\n\nBravo A, Medina R, Garreau M, et al.: An approach to coronary vessels detection in x-ray rotational angiography. In Müller C, Wong S, and La Cruz A, editors, IV Latin American Congress on Biomedical Engineering, Springer, 2007; 254–258. Publisher Full Text\n\nBravo A, Medina R, Díaz JA: A clustering based approach for automatic image segmentation: An application to biplane ventriculograms. In Martínez J, Carrasco J, and Kittler J, editors, Progress in Pattern Recognition, Image Analysis and Applications, Springer, 2006; 316–325. Publisher Full Text\n\nSchroeder W: The visualization toolkit: an object–oriented approach to 3D graphics. Kitware Clifton Park, N.Y, 2006. Reference Source\n\nAvila L, Kitware: The VTK User’s Guide. Kitware Inc, 2010. Reference Source\n\nSalomon D: Computer Graphics and Geometric Modeling. Springer Publishing Company, Incorporated, 2013. Reference Source\n\nLorensen WE, Cline HE: Marching cubes: A high resolution 3d surface construction algorithm. Comput Graph. 1987; 21(4): 163–169. Publisher Full Text\n\nDice L: Measures of the amount of ecologic association between species. Ecology. 1945; 26(3): 297–302. 
Publisher Full Text\n\nBravo A, Chacón G, Rodriguez J: Dice coefficient in MatLab (Version V1). Zenodo. 2018. Data Source"
}
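The Dice-coefficient validation used in the article above (comparing the segmented structures against the clinician's manual tracing) can be sketched in a few lines. This is an illustrative sketch, not the authors' MatLab script; the function name and the toy masks are our own.

```python
# A minimal sketch of Dice-based segmentation validation: overlap between an
# automatic segmentation and a clinician's manual tracing, both as binary masks.

def dice_coefficient(a, b):
    """Dice similarity of two binary masks given as flat 0/1 sequences:
    2*|A intersect B| / (|A| + |B|), in [0, 1] (Dice, 1945)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(1 for x in a if x) + sum(1 for y in b if y)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 3x3 masks in row-major order: automatic vs. manually traced shape.
auto   = [0, 1, 1,  0, 1, 1,  0, 0, 0]
manual = [0, 1, 1,  0, 1, 0,  0, 0, 0]

dsc = dice_coefficient(auto, manual)  # 2*3 / (4+3) = 0.857...
seg_error = 1.0 - dsc                 # disagreement expressed as an error fraction
```

Note that a high Dice value means good agreement; the corresponding error is `1 - dice`, a distinction that matters when reading the reported figures below.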
|
[
{
"id": "36172",
"date": "17 Aug 2018",
"name": "Jorge Brieva",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations A number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors propose a method to assess stomach tumor volume. They use multi-slice computerized tomography images that present a type 2 cancer.\n\nThe method includes three steps: first, an image enhancement to correct non-homogeneities in the background. The enhancement is based on the LUT. Then, the segmentation is carried out by the region growing technique, widely used in segmentation problems. Finally, a reconstruction is made using the Toolkit VTK-6.3.0. The validation uses the Dice coefficient to compare to the clinical manual segmentation.\n\nThe method is well explained; however, more detail would be suitable for the enhancement algorithm, which seems to be the core of the paper. In particular, in the equation for F_Level, is the index j used correctly after the sum? More details would be beneficial to understand the method.\n\nIn the segmentation method, are there some parameters to tune? If that is the case, it would be suitable to add them in the text.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Partly\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4022",
"date": "09 Oct 2018",
"name": "Gerardo Chacon",
"role": "Author Response",
"response": "The authors would like to thank Dr Jorge Brieva for his valuable comments, which helped to improve the manuscript. In order to explain the enhancement algorithm in more detail (this algorithm constitutes the core of our paper), the procedure for constructing the LUT described in Version 1 of our paper is modified as follows: 1. Choose n non-homogeneous images. 2. Determine the frequency percentage vector f^j = [f^j_0 f^j_1024 f^j_2048 f^j_3072 f^j_4095] (1 <= j <= n) for each image in the histogram. These frequencies are associated with the gray level vector Level = [Level_0 Level_1 Level_2 Level_3 Level_4] = [0 1024 2048 3072 4095]. 3. Obtain the average of the frequency percentage vectors over all images, F_(Level_k) = (1/n) Sum_(j=1)^n f^j_(Level_k), for all 0 <= k <= 4. 4. Construct a LUT as a transfer function defined by the concatenation of four linear transformations. Each linear function is constructed using Input_LUT^k = Level_k, for all 0 <= k <= 4, and Output_LUT^k = Sum_(l=0)^k F_(Level_l), for all 0 <= k <= 4, as input and output, respectively. On the other hand, in the section Phase 2: Segmentation, the following information about the segmentation phase is added: The region growing algorithm of the multi-platform application VolView considers two parameters, namely, the neighborhood size for the region growing (l) and the scale factor of grouping (tau). The clustering algorithm is applied by varying the value of these parameters in order to tune them. Additionally, in the section Results, the following text is added: In the parameter tuning procedure, for tau, all the values included in the interval [0, 10] with a step size of 0.1 are evaluated, while l varies between 1 and 20 with a step size of 1. For each set of parameters, the resulting segmented structures are compared with the corresponding structures traced by a clinician. The differences are estimated using the Dice coefficient. The parameter values that maximize the Dice coefficient are chosen for the proposed segmentation method. These same parameter values are considered to segment the other datasets."
}
]
},
{
"id": "37689",
"date": "28 Sep 2018",
"name": "Fred Prior",
"expertise": [
"Imaging informatics and quantitative image analysis including radiomic analysis of cancer lesions."
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThe data used in this study is available, but not from the source noted in the paper. The data is available from the Cancer Imaging Archive (https://wiki.cancerimagingarchive.net/display/Public/TCGA-STAD), not from the GDC portal as described.\n\nThe results section states that Dice coefficients are used for comparison of the machine-segmented lesions and the human-segmented lesions, which serve as a model of truth. However, the results are reported as: \"The error obtained (mean +/- standard deviation) for the eight datasets of the 3-D images is 91.54% +/- 5.26%, with a maximum value of 97.31% and a minimum value of 83.47%.\" I believe these may be the calculated Dice coefficients and not \"error\" estimates. Errors between the two segmented lesions this large would indicate complete failure of the technique.\n\nIn general, the techniques proposed are not novel but rather an application of well-known and proven techniques to a small data set.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4023",
"date": "09 Oct 2018",
"name": "Gerardo Chacon",
"role": "Author Response",
"response": "The authors would like to thank Dr Fred Prior for his comments on our paper. The hyperlink to the data source noted in Version 1 of our paper is modified to https://wiki.cancerimagingarchive.net/display/Public/TCGA-STAD (Cancer Imaging Archive). In the section Results, the last paragraph is modified according to the following text: The comparison between the segmented structures and the cancer shapes delineated by the clinician is performed based on the Dice coefficient. The Dice coefficient obtained (mean +/- standard deviation) for the eight datasets of the 3-D images is 91.54% +/- 5.26%, with a maximum value of 97.31% and a minimum value of 83.47%. Accordingly, the maximum estimated segmentation error corresponds to 16.53%, and the minimum estimated error to 2.69%. Additionally, in the section Conclusions, the last paragraph is modified as follows: In future work, a more complete validation is necessary, considering larger image datasets, and including a comparison of estimated parameters describing the adenocarcinoma volume with respect to results obtained using other measurement techniques. As a final remark, we would like to comment that the proposed methods for enhancing and segmenting the CT gastric images are conceptually simple and require minimal user interaction, which allows decreasing the intersubjective variability or personal uncertainty introduced by the methods' users. And, as verified with the help of the comments of another of the referees, the enhancement algorithm is the core of the paper. This algorithm allows controlling the non-homogeneities of the abdominal tomography images produced at the interface between the mucosa and the contrast agent; consequently, the technique allows improving the information associated with the unclearly differentiated edges of the tumor. This improvement is achieved by a scheme based on a lookup table. This framework focuses on analyzing the average information of a sample of images; then the optimal transfer function, useful to diminish the impact of non-homogeneities in the segmentation, is constructed. Hence, we consider our method conceptually simple and of low computational cost. In this sense, an interesting image processing technique is proposed. Finally, the application of our method to larger image data sets corresponds to putting our method into practice rather than to a research process. The Authors."
}
]
}
] | 1
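The LUT construction summarized in the author response above (a transfer function over the 12-bit gray range built from four concatenated linear segments whose output knots are cumulative averaged histogram percentages) can be sketched in Python. The rescaling of the cumulative outputs to [0, 4095] and all names below are our assumptions, not the authors' code.

```python
# Sketch of the piecewise-linear LUT used for CT image enhancement:
# inputs are the five knot gray levels, outputs are running sums of the
# averaged per-image histogram percentages, rescaled to 12 bits.

LEVELS = [0, 1024, 2048, 3072, 4095]  # knot gray levels Level_0..Level_4

def average_frequencies(freq_vectors):
    """Average the per-image frequency-percentage vectors f^j at the knots."""
    n = len(freq_vectors)
    return [sum(f[k] for f in freq_vectors) / n for k in range(len(LEVELS))]

def build_lut(avg_freq):
    """Concatenate four linear segments: input knots LEVELS, output knots
    the cumulative averaged percentages, rescaled to span [0, 4095]."""
    cum, total = [], 0.0
    for f in avg_freq:
        total += f
        cum.append(total)
    scale = 4095.0 / cum[-1]
    out = [c * scale for c in cum]
    lut = []
    for g in range(4096):
        # locate the segment containing gray level g, then interpolate
        k = next(i for i in range(len(LEVELS) - 1) if g <= LEVELS[i + 1])
        g0, g1 = LEVELS[k], LEVELS[k + 1]
        lut.append(out[k] + (out[k + 1] - out[k]) * (g - g0) / (g1 - g0))
    return lut
```

Applying the table maps each voxel's gray value `g` to `lut[g]`, a histogram-equalization-style remapping that compresses the over-represented intensity ranges responsible for the background non-homogeneities.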
|
https://f1000research.com/articles/7-1098
|
https://f1000research.com/articles/7-336/v1
|
19 Mar 18
|
{
"type": "Research Article",
"title": "Needle lost in the haystack: multiple reaction monitoring fails to detect Treponema pallidum candidate protein biomarkers in plasma and urine samples from individuals with syphilis",
"authors": [
"Geert A. Van Raemdonck",
"Kara K. Osbak",
"Xaveer Van Ostade",
"Chris R. Kenyon"
],
"abstract": "Background: Current syphilis diagnostic strategies lack a sensitive means of directly detecting Treponema pallidum antigens. A diagnostic test that could directly detect T. pallidum antigens in individuals with syphilis would be of considerable clinical utility, especially for the diagnosis of reinfections and for post-treatment serological follow-up. Methods: In this study, 11 candidate T. pallidum biomarker proteins were chosen according to their physiochemical characteristics, T. pallidum specificity and predicted abundance. Thirty isotopically labelled proteotypic surrogate peptides (hPTPs) were synthesized and incorporated into a scheduled multiple reaction monitoring assay. Protein extracts from undepleted/unenriched plasma (N = 18) and urine (N = 4) samples from 18 individuals with syphilis in various clinical stages were tryptically digested, spiked with the hPTP mixture and analysed with a triple quadrupole mass spectrometer. Results: No endogenous PTPs corresponding to the eleven candidate biomarkers were detected in any samples analysed. To estimate the Limit of Detection (LOD) of a comparably sensitive mass spectrometer (LTQ-Orbitrap), two dilution series of rabbit cultured purified T. pallidum were prepared in PBS. Polyclonal anti-T. pallidum antibodies coupled to magnetic Dynabeads were used to enrich one sample series; no LOD improvement was found compared to the unenriched series. The estimated LOD of MS instruments is 300 T. pallidum/ml in PBS. Conclusions: Biomarker protein detection likely failed due to the low (femtomoles/liter) predicted concentration of T. pallidum proteins. Alternative sample preparation strategies may improve the detectability of T. pallidum proteins in biofluids.",
"keywords": [
"MRM",
"Multiple Reaction Monitoring",
"targeted proteomics",
"Treponema pallidum",
"syphilis",
"biomarker discovery",
"antigen test",
"plasma"
],
"content": "List of abbreviations\n\nhPTPs Isotopically labelled proteotypic surrogate peptides\n\nLOD Limit of detection\n\nMSM Men who have sex with men\n\nPCR Polymerase chain reaction\n\nTPPA Treponema pallidum Particle Agglutination test\n\nRPR Rapid plasma reagin test\n\nCOG Clusters of Orthologous Groups\n\nFA Formic acid\n\nPBS Phosphate buffered saline\n\nNSAF Normalized spectral abundance factor\n\nIQR Interquartile range\n\nSISCAPA Stable Isotope Standards and Capture by Anti-Peptide Antibodies\n\n\nIntroduction\n\nTreponema pallidum ssp. pallidum (T. pallidum), a non-culturable microaerophilic spirochete, is responsible for more than 8 million new cases of syphilis per year1. There has been a resurgence of syphilis in a number of world regions over the last two decades1–3. In Europe2 and North America3, this increase has been most marked in men who have sex with men (MSM). A striking feature of these outbreaks has been the increasing proportion of cases that are occurring in patients with a previous diagnosis of syphilis4,5. Patients with reinfections are more likely to present with asymptomatic or less symptomatic disease4, hence the diagnosis of reinfection is wholly dependent on subtle changes in serological tests6. Two types of serological tests are used to diagnose syphilis: treponemal tests detect antibodies to T. pallidum and non-treponemal tests, such as the Rapid plasma reagin (RPR) test, detect agglutination secondary to the presence of anti-lipoidal antibodies reactive to material released from damaged host cells and possibly cardiolipin released from T. pallidum7. Treponemal tests remain positive for life and are therefore of no use in the diagnosis of reinfection. Non-treponemal tests are used for syphilis post-treatment follow-up and diagnosis of reinfection. A wide range of factors can result in increases in test titers, causing syphilis to be over-diagnosed and unnecessarily treated6,8–10. Direct T. 
pallidum detection techniques, including various nucleic acid amplification tests, have been developed, but apart from testing of primary ulcer specimens the sensitivity of these techniques is low11. Even in the setting of secondary syphilis, when there is a high T. pallidum load in the blood12, the sensitivity of polymerase chain reaction (PCR) tests reaches only 52 % on serum specimens11,13.\n\nThe T. pallidum genome, through evolutionary reduction, is one of the smallest of the human bacterial pathogens, with a predicted 1044 open reading frames14. Approximately half of the predicted proteins have been detected through MS techniques15,16, including the semi-quantification of T. pallidum proteins using spectral counting16. A T. pallidum transcriptome study demonstrated that almost all genes were expressed during peak rabbit experimental infection17. This maximum utilization of the genome, well characterized proteome, and swift invasion of the organism into the bloodstream (within 24 hours after infection18) make this pathogen an ideal candidate for antigen diagnostic assay development. A variety of antigen tests against other pathogens have been designed for clinical samples such as blood, cerebrospinal fluid, faeces and urine; and these have proven their utility in the diagnosis and assessment of therapeutic response in a number of infections, including Helicobacter pylori19, Cryptococcus neoformans20, Cryptosporidium ssp.21, Entamoeba histolytica22, Ebola virus23 and Mycobacteria tuberculosis24. If a highly sensitive and specific test could be developed that is able to confirm the presence or absence of T. pallidum in the body then this would be of considerable utility in the diagnosis of syphilis reinfections and in assessing therapeutic response. 
It could also be useful for the diagnosis of neuro- and congenital syphilis – two diagnoses where contemporary tests are suboptimal25.\n\nDuring the last decade, advanced MS-based proteomics platforms have emerged as mainstay bioanalytical tools for a broad range of clinical applications, including targeted protein identification26 and bacteria identification and typing27. In particular, the AQUA workflow28,29, with its use of stable isotopically labelled standard proteotypic peptides (henceforth referred to as ‘heavy’ PTPs or hPTPs) and selected/multiple reaction monitoring-mass spectrometry (SRM/MRM MS), has emerged as a powerful technique for the fast determination of multiple protein concentrations in highly complex sample matrices such as urine (reviewed by Mermelekas et al.30) and plasma (reviewed by Pernemalm and Lehtiö31). Precise quantitation of proteins is possible by using hPTPs as internal standards that correspond to endogenous peptides created during the enzymatic digestion of the sample of interest. When combined, the endogenous and synthetic peptides elute together chromatographically and ionize with the same efficiency. Since the quantity of the labelled peptide is known, the absolute quantity of the targeted native protein can be determined by comparing MRM hPTP/endogenous peak areas. The precision and utility of this highly sensitive multiplexed method has been demonstrated on undepleted/unenriched plasma for the detection of a panel of human cardiovascular disease32 and cancer33 biomarkers with a detection capability of four orders of magnitude (10^3–10^4 range in protein concentration) and up to femtomolar-level sensitivity in plasma34. 
Recently, a panel of 136 cancer candidate biomarkers was interrogated in unenriched urine samples using MRM, revealing detection limits of up to 25 picogram/ml urine35.\n\nWith regard to infectious disease biomarker studies, MS-based approaches identified candidate biomarkers in urine for Leishmania sp.36, which has led to the development of a urine capture ELISA diagnostic test37. Considerable progress has also been made in Mycobacterium tuberculosis38–40 biomarker studies; recent advancements include the detection of M. tuberculosis in urine using IgG capture, immunodepletion and MRM methods41 and MRM assay of exosomes isolated from serum samples from patients with tuberculosis38.\n\nIn this study, we investigated whether T. pallidum proteins could be detected in plasma and urine samples from individuals with syphilis using a targeted proteomics (MRM) approach. Successful development of a T. pallidum antigen test will most likely be contingent upon the simultaneous detection of multiple protein biomarkers to comprehensively cover different stages of disease. Eleven T. pallidum protein biomarkers were chosen based on predicted specificity, high predicted abundance, and physiochemical properties. Thirty surrogate hPTPs were synthesized corresponding to eleven candidate T. pallidum biomarkers. Analysis of eighteen plasma and four urine samples revealed no detectable MRM signal for the endogenous peptides from the biomarkers of interest. This is likely due to the extremely low (femtomoles per liter) predicted concentration of bacterial proteins in the samples of interest, or the fact that the biomarkers are not expressed during infection. T. pallidum spiking experiments established an MS detection limit of 300 bacteria/ml in PBS; polyclonal anti-T. 
pallidum magnetic bead enrichment did not improve the protein detectability.\n\n\nMethods\n\nBetween January 2014 and August 2015, 120 patients attending the Institute of Tropical Medicine Antwerp clinic, over the age of 17 years, and in whom a new diagnosis of syphilis was made and had not received antibiotics in the preceding thirty days, were recruited into the cohort study. Thirty HIV-positive controls, in whom the diagnosis of syphilis was excluded via serological and PCR testing, were also recruited. The diagnosis and staging of syphilis was according to the Centers for Disease Control and Prevention classification42, and treatment was administered according to European guidelines43. All patient sera were tested for syphilis using a RPR test (BD Macro-Vue RPR card test, Becton, Dickinson and Co., Sparks, MD, United States of America (USA)) and an antibody detection Treponema pallidum Particle Agglutination test (SERODIA-TPPA Fujirebio Inc., Tokyo, Japan). A PCR test targeting T. pallidum polA was also performed on serum44 and whole blood samples were tested for multiple gene targets45, as previously described. Selection criteria of participants from the cohort study for the MRM assay analysis included a range of syphilis clinical stages and prioritized predicted high bacterial loads, as demonstrated by positive PCR tests and/or high RPR titres. Patients with early stage syphilis (primary, secondary, early latent) that were plasma and/or whole blood PCR positive for T. pallidum were expected to have the highest bacterial load11,12.\n\nPlasma was collected immediately before Benzathine Penicillin G intramuscular injection using 7.5 ml EDTA-coated blood collection tubes (Sarstedt Monovette, Nümbrecht, Germany). We refer to these samples as the pre-penicillin samples. A selection of randomly selected patients participated in an additional blood draw three hours after penicillin treatment since studies have shown penicillin to be fast acting on T. 
pallidum, leading to consequent cell lysis and antigen release46. These samples are termed the post-penicillin samples. Plasma was chosen for the MRM assay according to HUPO guidelines47. Protease inhibitors were not added to the plasma samples since previous studies did not demonstrate a significant higher protein yield with treated samples48 and peptides could inadvertently be modified49. Plasma were subjected to dual centrifugation in an Eppendorf 22331 centrifuge (Hamburg, Germany) in an effort to minimize cellular contamination: whole blood was centrifuged at 2000 g for 10 minutes at ambient temperature, followed by transfer of the plasma fraction to a 50 ml falcon tube and centrifugation at 2400 g for 15 minutes. All plasma were processed and aliquoted into cryovials for storage at -80 °C in a long-term freezer unit (Eppendorf U725-G Innova New Brunswick, Hamburg, Germany) until further testing. Mid-stream random-void urine samples were collected and processed following HUPO guidelines50, including centrifugation for 10 minutes at 2000 g at ambient temperature in order to remove insoluble contents such as cells and casts. Urine was aliquoted into 15 ml falcon tubes and stored at -80 °C until further testing. All plasma and urine samples were processed within three hours of collection and were only subjected to one freeze thaw cycle.\n\nIn a previous descriptive study we used non-gel based complementary MS techniques to characterize the proteome of in vivo rabbit cultured T. pallidum16. Candidate T. pallidum biomarker proteins for the MRM assay were chosen based on the following specific criteria: relative protein abundance (based on semi-quantitative spectral counting techniques16), Clusters of Orthologous Groups (COG) functional categorization, microarray transcriptome data17, protein size, physicochemical properties (i.e. previously detected by MS), predicted subcellular localization16 and literature review. 
Each of the candidate biomarkers was digested in silico by subjecting the FASTA-formatted sequences to tryptic digestion, assuming 100 % digestion efficiency. Proteotypic peptides (PTPs) corresponding to these proteins were determined using ESPPredictor51 and pBLAST52; analysis of the proteins and PTPs was performed to determine possible homology with other bacterial species and human proteins. After PTP selection was finalized, isotopically labelled synthetic peptide standards (hPTPs) corresponding to the selected PTPs were synthesized (Heavy Peptide™ AQUA Basic with > 95 % purity; Thermo Fisher Scientific, Ulm, Germany).\n\nProtein concentrations of urine and plasma samples were determined based on the area under the curve at 214 nm using an RP-C4 column (Vydac 214TP5415; 4.6×150 mm, particle size 5 μm; Alltech Associates Inc., Lokeren, Belgium) coupled to an Alliance e2695 HPLC system equipped with a 996 PDA detector (Waters Corporation, Milford, MA, USA). For each sample, 250 µg of protein was precipitated by adding six volumes of ice-cold LC-MS grade acetone (Biosolve, Valkenswaard, the Netherlands) followed by overnight incubation in a freezer unit (Liebherr, Bulle, Switzerland) at -20 °C. In all cases, lo-bind Eppendorf tubes (Eppendorf, Hamburg, Germany) were used to ensure high recovery rates of proteins and peptides. Protein pellets were re-suspended in 50 mM Tris-HCl/6 M urea/5 mM dithiothreitol/10 % beta-mercaptoethanol (25 µL/100 µg protein) at pH 8.7. For the denaturation and reduction process, all samples were incubated at 65 °C in a hot water bath for 1 hour. Subsequently, proteins in all fractions were diluted in 50 mM Tris-HCl/1 mM CaCl2 (75 µL/100 µg protein) and alkylated by adding 200 mM iodoacetamide (10 µL/100 µg protein) for 1 hour at ambient temperature, protected from light. Proteomics-grade modified trypsin (Promega, Madison, WI, US) was added at a 30:1 protein-to-enzyme ratio. 
After incubation at 37 °C in a hot water bath for 18 hours, the digestion was stopped by freezing the samples. Protein digests were desalted by SPE using GracePure SPE C18-Max (50 mg) (W. R. Grace & Co., Columbia, MD, US) RP cartridges and a vacuum manifold. SPE cartridges were conditioned with 100 % methanol and equilibrated with 100 % LC/MS grade H2O and 0.1 % formic acid (FA). After loading the complete acidified (0.1 % FA) tryptic digest, peptides were washed with 10 % methanol and eluted with 40 % methanol/40 % acetonitrile (ACN) and 0.1 % FA. Eluted peptides were lyophilized and frozen at -20 °C until further analysis. Immediately before analysis, lyophilized digests were resuspended in 5 % ACN/0.1 % FA and spiked with a mixture of all hPTPs.\n\nOptimization of each PTP was performed on a triple quadrupole mass spectrometer (Waters Xevo TQ, Waters Corporation, Milford, MA, US) in order to obtain the most intense transitions. The capillary voltage was tuned to approximately 2 kV with a source temperature of 150 °C. Desolvation temperature was set at 400 °C with a nitrogen gas flow of 800 L/h. Cone voltage, collision energy and dwell times were optimized for each of the PTPs. All PTPs were dissolved in mobile phase A (MP-A), containing 5 % ACN (LC/MS grade) and 0.1 % FA. For each of the peptides individually, the Limit of Detection (LOD) was determined by performing a dilution series in MP-A. Based on these concentrations, a mixture of all hPTPs was made. A balanced hPTP mixture has been shown to increase quantification accuracy and reproducibility compared to an equimolar mixture in previous studies34. To check for possible suppressive effects of the plasma matrix, the hPTP mixture was spiked into plasma from a control study subject. A balanced mixture of hPTP (concentrations detailed in Supplementary File 1) was spiked into 50 µg of plasma digest. 
Chromatographic separation of the plasma and urine samples was performed on an RP-C18 UPLC column (Waters, CSH 150 × 2.1 mm, 1.7 µm at 35 °C) connected to an Acquity UPLC system (Waters Corporation, Milford, MA, USA). In order to separate all peptides as well as possible, an optimized linear gradient of Mobile Phase B (MP-B) (0.1 % FA in 100 % ACN) was applied: 5 % MP-B for 1 min and from 5 to 35 % MP-B in 5 min, followed by a steep increase to 100 % MP-B in 1 min, all at a flow rate of 300 µL/min. Based on the specific retention times of each peptide, three scheduled MRM runs of 10 minutes were generated, each of them containing 20 MS1 channels (10 endogenous (T. pallidum) PTPs without isotopic label and 10 channels with a synthetic hPTP equivalent). At least three transitions (ion pairs) were selected for each peptide of interest. For each scheduled MRM analysis, 50 µg of peptides (injection loop of 5 µL) per plasma/urine sample were loaded onto the analytical column. In addition to an extensive needle wash after each injection, a blank run was performed between two subsequent clinical samples to prevent carry-over effects. Data acquisition was controlled by MassLynx version 4.1, while targeted datasets were analysed by TargetLynx, which is part of MassLynx (Waters Corporation, Milford, MA, USA). All Xevo TQ MS raw spectral files are available at PeptideAtlas53 with the identifier PASS00978.\n\nT. pallidum protein enrichment was performed using magnetic beads (Dynabeads® M-270, Life Technologies, CA, USA) coated with biotin-conjugated polyclonal T. pallidum-specific antibodies (PA1-73103, Thermo Fisher Scientific, CA, USA) through streptavidin-biotin conjugation. According to the manufacturer’s protocol, 10 µg of antibody was used to bind 1 mg of beads (approximately 5 × 10^7 beads).\n\nIn vivo rabbit cultured purified T. pallidum DAL-1 strain extracts54,55 were kindly provided by the group of David Šmajs from the Masaryk University, Czech Republic. 
The original concentration of the T. pallidum extract was approximately 10⁶ bacteria/ml as quantified under darkfield microscopy using an Olympus BX41 (Olympus Corporation, Tokyo, Japan) equipped with darkfield microscope condenser DCW 1.4-1.2; magnification 10×40. Samples were stored in 1 ml phosphate buffered saline (PBS) and only subjected to one freeze-thaw cycle. Two dilution series of T. pallidum were prepared, each time starting in 1 ml of PBS and finally equating to eight approximate bacterial concentrations: 10⁴, 10³, 300, 100, 33, 10, 3 and 0 bacteria/ml.\n\nFor one dilution series, each of the eight fractions was incubated with a constant amount (~10⁵) of magnetic beads coated with polyclonal anti-T. pallidum antibodies. After incubation for two hours at 4 °C and magnetic separation, the supernatant was discarded and the beads were washed three times with PBS. To lyse the antibody-bound bacteria, 1 ml of PBS was added to each bead sample, and the samples were sonicated on ice using a Sonics Vibra Cell VC130 (Sonics and Materials Inc., Newtown, CT, USA) (two times 30 seconds with an amplitude of 50 %). The bead fraction (retentate) was retained after sonication using magnetic separation. Released proteins were precipitated by adding ice-cold acetone and incubated overnight at -20 °C. Tryptic digestion was performed, following the aforementioned procedure, on both the precipitated proteins (supernatant) and directly “on-bead” (retentate), to test for proteins possibly left unreleased during sonication. For the second dilution series (unenriched), 1 ml was directly drawn from each of the eight samples. The samples from this series were also sonicated on ice (two times 30 seconds with an amplitude of 50 %) to lyse the bacteria.
Released proteins were then acetone-precipitated and subsequently digested, following the same procedure as the parallel series.\n\nPeptide mixtures were separated by RPLC on a Waters nano-UPLC system using a nanoACQUITY BEH C18 Trap column (100 Å, 5 μm, 180 μm × 20 mm) connected to a nanoACQUITY BEH C18 analytical Column (130 Å, 1.7 μm, 100 μm × 100 mm) (Waters Corporation, Milford, MA, USA). Peptides were dissolved in MP-A, containing 2 % ACN and 0.1 % FA, and spiked with 20 fmol [Glu1]-fibrinopeptide B, which served as an internal calibrant. A linear gradient of MP-B (0.1 % FA in 98 % ACN) was applied from 2 to 45 % MP-B in 45 min, followed by a steep increase to 95 % MP-B in 2 min, at a flow rate of 400 nl/min. The nano-LC was coupled online with an LTQ Orbitrap Velos (Thermo Scientific, San Jose, CA, US) mass spectrometer using a PicoTip Emitter (New Objective, Woburn, MA, US) linked to a nanospray ion source. The mass spectrometer was set up in a data-dependent acquisition MS/MS mode in which a full scan spectrum (350–2500 m/z, resolution of 60,000) was followed by a maximum of ten CID tandem mass spectra (100 to 2000 m/z). Peptide ions were selected as the twenty most intense peaks of the MS scan. CID scans were acquired in the LTQ ion trap part of the mass spectrometer with a normalized collision energy of 32 %.\n\nObtained spectra were screened against the T. pallidum reference and resequenced databases (UniProt proteome IDs UP00001425914 and UP00000081156) using the MASCOT search engine (Matrix Science; version 2.1.03) with trypsin as the digestion enzyme. Carbamidomethylation of cysteines was listed as a fixed modification, while methionine oxidation was set as a variable modification. A maximum of one missed cleavage was tolerated. Mass tolerance was set to 10 ppm for the precursors and 0.8 Da for the fragment ions. The false discovery rate was set at 5 %.
Scaffold Q+ (version 4.6.2, Proteome Software Inc., Portland, OR, US) was used to validate MS/MS-based peptide and protein identifications. Protein identifications were accepted if they could be established at greater than 95.0 % probability according to the Protein Prophet algorithm57.\n\nAll LTQ-Orbitrap MS/MS raw spectral data are available at PeptideAtlas53 with the identifier PASS00978.\n\n\nResults\n\nEighteen syphilis-infected study participants were selected for the MRM assay analyses (Table 1). All participants were male and identified as MSM. A third of the participants (6/18; 33 %) were HIV positive. Five (28 %) presented with primary, eleven (61 %) with secondary, and two (11 %) with early latent stage disease. Thirteen participants were confirmed T. pallidum-positive by serum and/or whole blood PCR testing. Four participants had indeterminate PCR results, meaning their samples were weakly positive; a second confirmatory PCR was not performed on these samples. One patient was negative for both whole blood and serum PCR. All participants tested positive with both the RPR and TPPA tests. The median RPR value was 1/64 (interquartile range (IQR): 1/16–1/128). In total, 22 samples were analysed, including N = 12 pre-penicillin treatment plasma, N = 6 post-penicillin treatment plasma and N = 4 pre-penicillin treatment urine samples.\n\nLegend: #- patients were treated with intramuscular injection of 2.4 MU Benzathine penicillin G; Indet.- indeterminate PCR result, second confirmatory PCR was not performed; ND- not done\n\nEleven T. pallidum proteins were selected as candidate biomarkers (Table 2). Most selected biomarkers had high normalized spectral abundance factor (NSAF) scores according to our previous study16 (median 4.02; IQR: 1.97-6.97) and high microarray signal ratios17 (median 3.05; IQR: 0.74-6.8). The median protein molecular weight was 39 kDa (IQR: 28-81).
Two proteins were predicted to be located in the flagellum (TP_0249 and TP_0792), two in the ribosome (TP_0250b and TP_0244), and the subcellular localization of five proteins was unknown. Protein TP_0326, a BamA orthologue, has been experimentally shown58–60 to be localized in the outer membrane. A typical target for PCR assays is polA, encoding protein TP_010561. One protein, Peptidyl-prolyl cis-trans isomerase (TP_0862), was found in a previous proteomics study, where it demonstrated moderate reactivity during immunoblot experiments with serum from T. pallidum-infected humans and rabbits15. Protein TprG (TP_0317) is part of the paralogous tpr gene family that encodes candidate virulence factors62 and is partially homologous to Tpr E/J. According to pBLAST analysis, none of the chosen biomarker proteins or corresponding PTPs demonstrated high homology with proteins of other pathogens, non-pathogenic commensal bacteria or humans (data not shown). One to three well-suited PTPs were selected for each biomarker, for a total of 30 PTPs.
Details pertaining to these are provided in Table 2.\n\nLegend: *- UniProt proteome ID UP000014259; &- ORF was not annotated in the re-sequenced Nichols strain genome due to its length below the 150 bp limit14; #- underlined/bold amino acids indicate stable isotope labelled residues; $- peptide is homologous in Tpr E/G/J protein sequences; @- subcellular location as reported in Osbak et al.16; NK- not known; NSAF- normalized spectral abundance factor; COG- clusters of orthologous groups; COG categories: L- Replication, recombination and repair; M- Cell wall/membrane/envelope biogenesis; N- Cell motility; O- Posttranslational modification, protein turnover, chaperones; J- Translation, ribosomal structure and biogenesis; S- Function unknown; U- Intracellular trafficking, secretion, and vesicular transport.\n\nThe LOD for each peptide was determined individually by performing a dilution series in MP-A; the median LOD was 68.5 (IQR: 14.2-176.7) picomoles. Once the peptide mixture composition was optimized based on the LOD, 2 µL of this mixture (Supplementary File 1) was spiked into 50 µg plasma from a control patient, whereby no significant variations in the signal of the hPTP transitions could be detected, indicating no transition interference from the plasma. After optimizing each of the PTPs, three different sets of transitions were combined in an MRM assay based on their chromatographic retention time, as detailed in Supplementary File 1. The experiments contained a total of 141 targeted ion pairs (transitions) corresponding to 30 PTPs from eleven T. pallidum proteins. Ten of the eleven proteins were represented by two or more (h)PTPs (Table 2/Supplementary File 1). In total, three scheduled MRM assays of 10 minutes, each containing 20 peptides (10 endogenous (T. pallidum) peptides and 10 hPTP standards), were developed. These assays were evaluated based on a balanced mixture of all 30 hPTP standards.
Unfortunately, although each of the 30 spiked hPTPs could be detected, none of the selected endogenous T. pallidum peptides could be identified in any of the MRM assays (Figure 1; Supplementary File 2*).\n\n(a) synthetic hPTPs, even numbers and (b) endogenous (T. pallidum) PTPs, odd numbers; gradient 1 of 3. For each peptide the number of selected transitions (channels) is reported. The x-axis shows the chromatographic retention time of the corresponding peptide while the y-axis shows the relative intensity of the MS2 signal. Note: Signal fluctuations present in the ‘endogenous’ PTP chromatogram are always the result of just one transition, often coupled with a shift in retention time and m/z values that differ from the hPTP run; these are therefore considered noise.\n\nTwo T. pallidum spiking dilution series were prepared in PBS and subjected to LTQ-Orbitrap MS/MS analysis in order to estimate the LOD of MS detection. One of the series was subjected to an additional polyclonal antibody coupled magnetic bead enrichment step, including sonication of the beads and subsequent separate measurement of the lysate and on-bead digestion retentate (Figure 2).\n\nIn total, eight different concentrations of T. pallidum (from 10⁴ to 0 bacteria/ml PBS) were treated in three different ways: i) T. pallidum was enriched using magnetic beads coated with polyclonal anti-T. pallidum antibodies and lysed by sonication to release T. pallidum proteins into the supernatant; acetone-precipitated proteins were trypsinized; ii) in order to detect any remaining protein on the beads, the beads were also trypsinized (retentate on-bead trypsinization); iii) as a control, non-enriched samples were sonicated and immediately trypsinized. *- proteins selected as candidate biomarkers in this study. All samples were analysed by an LTQ-Orbitrap mass spectrometer.\n\nTwo unique T.
pallidum proteins, Cytoplasmic filament protein A (TP_0748) and Lipoprotein antigen Tp47 (TP_0574), were found in the 300 bacteria/ml fraction in the enriched and unenriched samples, respectively (Figure 3; Supplementary File 3). Therefore, the LOD based on a high-resolution LTQ-Orbitrap instrument was approximately 300 bacteria/ml PBS for both the antibody-enriched and unenriched samples, meaning there was no significant improvement in LOD using bead enrichment. No proteins were detected at any sample concentration in the enriched bacterial lysate (supernatant) fraction. Possibly, the sonication conditions were not harsh enough to lyse the bacteria on the beads, and lysis was mainly the result of trypsin treatment under denaturing conditions. In total, eight unique T. pallidum proteins were found in both the unenriched and enriched retentate dilution series: 60 kDa chaperonin (TP_0030), Flagellar filament outer layer protein flaA1 (TP_0249), Alkyl hydroperoxide reductase (TP_0509), Lipoprotein antigen Tp47 (TP_0574), Galactose ABC superfamily ATP binding cassette transporter, binding protein (TP_0684), Cytoplasmic filament protein A (TP_0748) and the Flagellar filament core proteins flaB1/B3 (TP_0792/TP_0870). Four proteins, Lipoprotein, 15 kDa (TP_0171), 10 kDa chaperonin (TP_1013), Elongation factor Tu (TP_0187) and Tp34 lipoprotein (TP_0971), were each found in only one of the two series (unenriched or enriched). Ten unique T. pallidum proteins were found at the highest concentration (10⁴ bacteria/ml) for both the enriched retentate sample (N = 10) and the non-enriched sample (N = 10). Five unique T. pallidum proteins were found in the 10³ bacteria/ml sample, including N = 4 in the unenriched and N = 4 in the retentate fractions. A peptide (LSGGVAVIK) related to 60 kDa chaperonin (TP_0030) was detected in the low concentration (100/33/10/3 bacteria/ml) and in the negative control samples of the enriched sample series.
This was likely a false-positive, non-specific peptide arising from rabbit protein contamination, since this short peptide sequence is closely homologous to the Oryctolagus cuniculus (rabbit) 60 kDa heat shock protein; alternatively, it could have originated from the beads or antibodies. As a result, it was excluded from the analysis. Three T. pallidum proteins detected in both the enriched and unenriched sample series were also biomarker candidates tested in the MRM assay experiments: Flagellar filament core protein flaB2 (TP_0792), Cytoplasmic filament protein A (TP_0748) and the Flagellar filament outer layer protein flaA1 (TP_0249). Detailed information about the identified proteins, peptides, coverage and search parameters can be found in Supplementary File 3. Rough concentration calculations estimated that our target PTPs would be present in the femtomoles per liter range in human T. pallidum infection (calculations presented in Supplementary File 4).\n\n\nDiscussion\n\nThe T. pallidum MRM assay designed in this study failed to detect any of the 30 targeted proteotypic peptides related to eleven candidate T. pallidum protein biomarkers in eighteen plasma and four urine samples from individuals with syphilis. A number of explanations are possible. The foremost is the extremely low predicted concentration of bacterial proteins compared to host proteins. To a large extent, our estimates of T. pallidum bacterial load in blood are based on molecular studies. In one of the largest studies, Tipple et al. found that the median copy numbers of Lipoprotein antigen Tp47 (TP_0574) DNA detectable per milliliter of whole blood were 127, 516 and 70 in primary, secondary and latent syphilis, respectively12. Other studies have produced comparable results46,63,64, with the exception of a recent study that found a median of 1.4 × 10⁵ T. pallidum/ml in whole blood from patients with secondary syphilis65.\n\nThe concentration of T.
pallidum in blood reported by these PCR-based studies is lower than our estimated LOD from the shotgun experiment on diluted samples (300 T. pallidum/ml); moreover, detecting 300 T. pallidum/ml in a clinical workflow would require a 500x higher concentration, because the LOD experiment analysed the protein content of 300 T. pallidum from a full 1 ml, whereas only 2 µl of sample is injected (see Supplementary File 4). Despite this outcome, we hoped to detect T. pallidum proteins in the plasma or urine of some syphilis patients because i) MRM measurements are generally more sensitive than shotgun experiments since scanning times are drastically reduced, and ii) the amounts from Tipple et al.12 were median values, so we hypothesized that some patients (especially those with secondary syphilis) might have high T. pallidum levels detectable by MRM. These results could then motivate us to develop an (immuno)assay capable of detecting the proteins even at low concentrations.\n\nLittle difference in T. pallidum abundance has been found among whole blood, plasma and serum11. Not much is known about the persistence of T. pallidum in the human urinary tract, and to our knowledge no studies have quantified T. pallidum in the urine of syphilis-infected patients. However, even if T. pallidum does not consistently persist in the urinary tract, bacterial proteins present in the blood could be filtered through the glomerulus, ending up in the urine either intact or as peptide fragments, depending on the size of the protein and the state of proteolysis66.\n\nThese considerations suggest that detection of T. pallidum proteins in human biofluids may not be possible without additional steps such as front-end immunoaffinity depletion67, two-dimensional LC separation68 and/or selective enrichment of target proteins/peptides (as reviewed by Shi et al.69). These techniques, or combinations thereof, have allowed the detection of low-abundance proteins down to the low- to sub-nanogram/ml level69,70 in clinical samples.
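The sensitivity gap discussed above can be illustrated with rough back-of-the-envelope arithmetic. The round numbers below (300 bacteria/ml LOD, 1 ml versus 2 µl, 516 Tp47 copies/ml) come from the text; the derived figures are illustrative extrapolations, and the authors' exact calculations are in Supplementary File 4.

```python
# Rough sensitivity-gap arithmetic (illustrative sketch; the authors' exact
# calculations are in Supplementary File 4).

lod_bacteria_per_ml = 300       # shotgun LOD estimated from the PBS dilution series
lod_sample_volume_ul = 1000.0   # the LOD experiment used protein from a full 1 ml
injection_volume_ul = 2.0       # volume of sample injected per run

# To load the same absolute protein amount from a 2 µl injection as from
# the full 1 ml LOD sample, the concentration must be higher by:
fold_needed = lod_sample_volume_ul / injection_volume_ul
print(fold_needed)              # -> 500.0

# The same LOD expressed as a bacterial concentration for a 2 µl injection:
effective_lod = lod_bacteria_per_ml * fold_needed
print(effective_lod)            # -> 150000.0 bacteria/ml

# Compare with the median Tp47 DNA load in secondary syphilis (Tipple et al.):
median_secondary = 516          # copies/ml whole blood
print(effective_lod / median_secondary)  # ~290-fold above the median load
```

On these assumptions, even secondary-stage samples would sit two orders of magnitude below the effective per-injection detection limit, which is consistent with the enrichment strategies proposed in the surrounding discussion.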
For example, to reduce the wide dynamic range of plasma proteins, multicomponent single-step immunoaffinity depletion of high-abundance (host) proteins can allow up to a 10-20-fold enrichment of low-abundance proteins through depletion of 90–95 % of the total protein mass67. However, a particular concern with this approach is the possibility of concomitant removal of low-abundance proteins that bind to the antibodies or to high-abundance proteins, as shown in a study that systematically analysed the antibody-bound (high-abundance) protein fraction and found that it contained 101 proteins identified with a high degree of confidence71. T. pallidum has a high binding affinity for constituents of serum and host cells, including laminin72, fibronectin73,74 and albumin75, which may lead to unintentional depletion of targeted proteins if human protein-specific immunodepletion were applied. Furthermore, targeted mass spectrometric immunoassays (MSIA) that use surface-immobilized antibodies to affinity-retrieve proteins from biological samples have proven their utility for clinical applications76–78. In our study, magnetic bead-coupled polyclonal anti-T. pallidum antibodies failed to detect significantly more T. pallidum proteins compared to the unenriched dilution series. Antibody effectiveness is dictated by binding affinity; the commercial antibodies we used had, to our knowledge, not previously been characterized with respect to binding affinity or targeted proteins. Furthermore, it is unlikely that the polyclonal antibodies would bind a large range of proteins, since few (<5 %) T. pallidum proteins are immunogenic15,79. The fact that T. pallidum can remain in ‘plain sight’ without invoking immune defences80, together with the very low abundance of outer membrane proteins compared to other human pathogens81, also suggests that antibody enrichment of whole organisms and/or proteins would probably not be an effective strategy.
Peptide-level immunoenrichment, also known as the ‘Stable Isotope Standards and Capture by Anti-Peptide Antibodies’ (SISCAPA) method developed by Anderson et al.82, has shown considerable promise as a high-throughput, automated, highly multiplexed approach for protein biomarker quantification, with MRM application detection limits in the low picogram/ml range of protein concentration in plasma83. If a selection of T. pallidum peptides could be definitively demonstrated to be present in plasma or urine, then this could be an attractive analytical approach with a strong potential for yielding the detection capabilities and precision needed for clinical applications.\n\nHowever, apart from the low abundance in plasma or urine, other factors could explain why the T. pallidum proteins were not detected in our MRM experiments:\n\n1. The T. pallidum LOD spiking experiments were performed in PBS buffer rather than in a highly complex plasma or urine matrix.\n\n2. Variations in gene expression and structural components of proteins could also account for the lack of T. pallidum protein detection. Fluctuations in gene expression may explain why we did not find TprG, a protein implicated in phase variation, which has been shown to be expressed at varying levels during infection due to changes in the number of guanine nucleotide repeats immediately upstream of its transcriptional start site84. Heterogeneous T. pallidum protein sequence sites14,16,85 could also confound rigid MRM assay detection parameters. Such heterogeneity has been shown16 to be present in one candidate biomarker, TP_0922, although this variable site was not present in the PTPs incorporated in this MRM assay. Poor proteolytic cleavage can stem from structural features of the protein, different digestion kinetics and post-translational modifications. For example, phosphorylated residues within two amino acids of the point of cleavage can hinder proteolysis86. Little is known about the extent of T.
pallidum protein post-translational modification aside from a study that demonstrated glycosylation of the Flagellar core proteins (FlaBs) by antibody and glycan staining techniques87; however, the exact modification sites and extent of modification remain unknown. Proteomics studies of Leptospira interrogans have demonstrated likely roles for protein acetylation and methylation in virulence mechanisms88,89.\n\n3. We tested only eleven of the more than a thousand predicted proteins in the T. pallidum proteome56, a selection largely based on spectral counting16 as an estimation of protein abundance. We cannot assume, however, that this indirect manner of quantifying T. pallidum protein levels in a rabbit testicle model directly recapitulates T. pallidum protein expression levels in plasma samples of syphilis-infected patients. One of the reasons for this is that protein expression may vary according to host and disease stage. Antigen detection during latent stage disease will be especially challenging, since T. pallidum has been shown to sequester itself in protected niches such as the eyes, hair follicles and nerves90. Other T. pallidum proteins may be more suitable diagnostic biomarkers, provided that they reflect the disease stages studied and are consistently present in the biofluids of interest. For example, Lipoprotein Tp47, which could still be identified at the lowest detectable concentration (300 T. pallidum/ml) in this study, could be an interesting biomarker for future studies.\n\n4. Various technical limitations such as a possibly suboptimal chromatographic gradient length, modifiable proteotypic residues and protein degradation secondary to sample processing could have impeded biomarker detection. Other studies have reported chromatographic gradient lengths of 30 minutes or longer32,33,35,38, thus implementation of longer gradients could be considered in future studies to improve peptide resolution.
In this study, chromatographic separations were split over three shorter 10-minute gradients to optimize sample throughput without losing MS sensitivity to overlapping transition windows; co-eluting peptides were therefore distributed over different chromatographic runs, since plasma protein availability was not a limiting factor. Oxidizable proteotypic residues, namely cysteine, methionine and tryptophan, can undergo artifactual modifications during processing or storage, resulting in multiple forms of the targeted peptides. That said, the PTP selection process requires balancing many different parameters, and selecting peptides that contain suboptimal amino acid residues can sometimes remain the most favourable option. Ribosomal protein TP_0250b was represented by only one PTP, which may have limited its detectability; future assays should ideally incorporate more than one peptide per protein.\n\n5. Sample processing may also have contributed to protein degradation; therefore, prompt analysis of fresh, non-frozen biological specimens is recommended where possible. Moreover, alternative sample processing procedures, such as the use of molecular weight cut-off filters to concentrate urine, could improve protein detectability39.\n\n6. Lastly, only a limited number of clinical samples was analysed (urine samples in particular), and this was a single-centre study with only MSM participants; the findings are therefore not broadly generalizable. An improvement for future studies would be the incorporation of isotopically labelled (non-T. pallidum) reference standards, which have been shown to improve analytical precision, detect variations in instrument performance and aid in detecting chemical interferences91.\n\nTargeted MS approaches can only search for a limited number of pre-selected biomarker candidates.
A more comprehensive approach would be to step back and conduct broader shotgun proteomics in plasma and urine samples of individuals with syphilis. Shotgun approaches identifying M. tuberculosis antigens in urine have previously been successful39,40. A compelling study by Eyford et al. used a ‘deep-mining’ proteomics approach and was able to detect 254 Trypanosoma brucei rhodesiense proteins in plasma from African sleeping sickness patients92. Quantitative data-independent acquisition modes of MS analysis, including SWATH-MS93, are also very promising avenues for clinical applications94,95.\n\n\nConclusions\n\nIn an effort to identify promising T. pallidum diagnostic biomarkers, we designed a scheduled MRM assay incorporating 141 MRM ion pairs corresponding to 30 PTPs from eleven T. pallidum proteins. Factors such as the extremely low (femtomoles per liter) predicted T. pallidum protein concentration in biofluids, possible variable protein expression according to host/disease stage and the potential presence of protein post-translational modifications likely contributed to the lack of signal detection for all candidate biomarkers investigated. Since the proteins targeted in this study were likely buried in the proverbial haystack of plasma proteins, alternative sample preparation and analysis strategies are warranted.
With the rapidly progressing innovation in MS applications and technology, we believe clinical proteomics is far from its pinnacle of potential.\n\n\nData availability\n\nThe datasets supporting the conclusions of this article are available in the PeptideAtlas53 repository, with the identifier PASS00978, in addition to being provided within the article and its supplementary files.\n\n\nConsent and ethics approval\n\nThe prospective observational cohort study (SeTPAT ClinicalTrials.gov # NCT02059525) that provided the clinical samples used in this study was approved by the Institutional Review Board of the Institute of Tropical Medicine Antwerp and the Ethics Committee of the University of Antwerp (13/44/426), Belgium. Written informed consent for publication of the participants’ anonymized details was obtained from the participants. The T. pallidum ssp. pallidum DAL-1 strain used in this study was propagated in rabbits at the Veterinary Research Institute in Brno, Czech Republic. The handling of animals in the study was performed in accordance with current Czech legislation (Animal Protection and Welfare Act No. 246/1992 Coll. of the Government of the Czech Republic). These specific experiments were approved by the Ethics Committee of the Veterinary Research Institute (Permit Number 20–2014).
Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by a grant from the Flanders Research Foundation, SOFI-B Grant (#757003) to CRK, http://www.fwo.be/.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nSpecial thanks to the study participants, in addition to David Šmajs and Michal Strouhal (Masaryk University, Brno, Czech Republic) for providing the purified T. pallidum DAL-1 strain used in this study.\n\n\nSupplementary materials\n\nSupplementary File 1. Table listing of optimized MRM parameters for 30 peptides targeting 11 T. pallidum proteins. (Supplementary File 1.xlsx).\n\nSupplementary File 2. Examples of intensity plots.\n\nSupplementary File 3. Table listing of protein and peptide reports for the LOD experiments using purified T. pallidum dilution series and ESI-LTQ-Orbitrap MS/MS analysis. (Supplementary File 3.xlsx).\n\nSupplementary File 4. Calculations to estimate concentration of T. pallidum proteins corresponding to proteotypic peptides (PTPs) in human syphilis infections. (Supplementary file 4.xlsx).\n\n\nReferences\n\nNewman L, Rowley J, Vander Hoorn S, et al.: Global Estimates of the Prevalence and Incidence of Four Curable Sexually Transmitted Infections in 2012 Based on Systematic Review and Global Reporting. PLoS One. 2015; 10(12): e0143304. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan de Laar M, Spiteri G: Increasing trends of gonorrhoea and syphilis and the threat of drug-resistant gonorrhoea in Europe. Euro Surveill. 2012; 17(29): pii: 20225. PubMed Abstract\n\nPeterman TA, Su J, Bernstein KT, et al.: Syphilis in the United States: on the rise? Expert Rev Anti Infect Ther.
Informa UK, Ltd; 2015; 13(2): 161–8. PubMed Abstract | Publisher Full Text\n\nKenyon C, Lynen L, Florence E, et al.: Syphilis reinfections pose problems for syphilis diagnosis in Antwerp, Belgium - 1992 to 2012. Euro Surveill. 2014; 19(45): 20958. PubMed Abstract | Publisher Full Text\n\nOgilvie GS, Taylor DL, Moniruzzaman A, et al.: A population-based study of infectious syphilis rediagnosis in British Columbia, 1995-2005. Clin Infect Dis. 2009; 48(11): 1554–8. PubMed Abstract | Publisher Full Text\n\nSeña AC, White BL, Sparling PF: Novel Treponema pallidum serologic tests: a paradigm shift in syphilis screening for the 21st century. Clin Infect Dis. Oxford University Press; 2010; 51(6): 700–8. PubMed Abstract | Publisher Full Text\n\nBelisle JT, Brandt ME, Radolf JD, et al.: Fatty acids of Treponema pallidum and Borrelia burgdorferi lipoproteins. J Bacteriol. 1994; 176(8): 2151–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJoyanes P, Borobio M V, Arquez JM, et al.: The association of false-positive rapid plasma reagin results and HIV infection. Sex Transm Dis. 1998; 25(10): 569–71. PubMed Abstract | Publisher Full Text\n\nMonath TP, Frey SE: Possible autoimmune reactions following smallpox vaccination: the biologic false positive test for syphilis. Vaccine. 2009; 27(10): 1645–50. PubMed Abstract | Publisher Full Text\n\nSeña AC, Wolff M, Martin DH, et al.: Predictors of serological cure and Serofast State after treatment in HIV-negative persons with early syphilis. Clin Infect Dis. Oxford University Press; 2011; 53(11): 1092–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGayet-Ageron A, Lautenschlager S, Ninet B, et al.: Sensitivity, specificity and likelihood ratios of PCR in the diagnosis of syphilis: a systematic review and meta-analysis. Sex Transm Infect. 2013; 89(3): 251–6. PubMed Abstract | Publisher Full Text\n\nTipple C, Hanna MO, Hill S, et al.: Getting the measure of syphilis: qPCR to better understand early infection. 
Sex Transm Infect. 2011; 87(6): 479–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCastro R, Prieto E, Aguas MJ, et al.: Detection of Treponema pallidum sp pallidum DNA in latent syphilis. Int J STD AIDS. 2007; 18(12): 842–5. PubMed Abstract | Publisher Full Text\n\nPětrošová H, Pospíšilová P, Strouhal M, et al.: Resequencing of Treponema pallidum ssp. pallidum Strains Nichols and SS14: correction of sequencing errors resulted in increased separation of syphilis treponeme subclusters. PLoS One. 2013; 8(9): e74319. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcGill MA, Edmondson DG, Carroll JA, et al.: Characterization and serologic analysis of the Treponema pallidum proteome. Infect Immun. 2010; 78(6): 2631–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOsbak KK, Houston S, Lithgow KV, et al.: Characterizing the Syphilis-Causing Treponema pallidum ssp. pallidum Proteome Using Complementary Mass Spectrometry. PLoS Negl Trop Dis. 2016; 10(9): e0004988. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmajs D, McKevitt M, Howell JK, et al.: Transcriptome of Treponema pallidum: gene expression profile during experimental rabbit infection. J Bacteriol. American Society for Microbiology; 2005; 187(5): 1866–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSalazar JC, Rathi A, Michael NL, et al.: Assessment of the kinetics of Treponema pallidum dissemination into blood and tissues in experimental syphilis by real-time quantitative PCR. Infect Immun. American Society for Microbiology (ASM); 2007; 75(6): 2954–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVaira D, Malfertheiner P, Mégraud F, et al.: Diagnosis of Helicobacter pylori infection with a new non-invasive antigen-based assay. HpSA European study group. Lancet. 1999; 354(9172): 30–3. 
PubMed Abstract | Publisher Full Text\n\nJarvis JN, Percival A, Bauman S: Evaluation of a novel point-of-care cryptococcal antigen test on serum, plasma, and urine from patients with HIV-associated cryptococcal meningitis. Clin Infect Dis. 2011; 53(10): 1019–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nParisi MT, Tierno PM Jr: Evaluation of new rapid commercial enzyme immunoassay for detection of Cryptosporidium oocysts in untreated stool specimens. J Clin Microbiol. 1995; 33(7): 1963–5. PubMed Abstract | Free Full Text\n\nHaque R, Ali IK, Akther S, et al.: Comparison of PCR, isoenzyme analysis, and antigen detection for diagnosis of Entamoeba histolytica infection. J Clin Microbiol. 1998; 36(2): 449–52. PubMed Abstract | Free Full Text\n\nCross RW, Boisen ML, Millett MM, et al.: Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection. J Infect Dis. 2016; 214(suppl 3): S210–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFlores LL, Steingart KR, Dendukuri N, et al.: Systematic review and meta-analysis of antigen detection tests for the diagnosis of tuberculosis. Clin Vaccine Immunol. 2011; 18(10): 1616–27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTramont E, Mandell GL, Bennett JE, et al.: Princ. Pract. Infect. Dis. 8th ed. Churchill Livingstone Inc. 2015.\n\nSabbagh B, Mindt S, Neumaier M, et al.: Clinical applications of MS-based protein quantification. Proteomics Clin Appl. 2016; 10(4): 323–45. PubMed Abstract | Publisher Full Text\n\nCheng K, Chui H, Domish L, et al.: Recent development of mass spectrometry and proteomics applications in identification and typing of bacteria. Proteomics Clin Appl. 2016; 10(4): 346–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGerber SA, Rush J, Stemman O, et al.: Absolute quantification of proteins and phosphoproteins from cell lysates by tandem MS. Proc Natl Acad Sci U S A. 
National Academy of Sciences; 2003; 100(12): 6940–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKettenbach AN, Rush J, Gerber SA: Absolute quantification of protein and post-translational modification abundance with stable isotope-labeled synthetic peptides. Nat Protoc. Nature Research; 2011; 6(2): 175–86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMermelekas G, Vlahou A, Zoidakis J: SRM/MRM targeted proteomics as a tool for biomarker validation and absolute quantification in human urine. Expert Rev Mol Diagn. 2015; 15(11): 1441–54. PubMed Abstract | Publisher Full Text\n\nPernemalm M, Lehtiö J: Mass spectrometry-based plasma proteomics: state of the art and future outlook. Expert Rev Proteomics. 2014; 11(4): 431–48. PubMed Abstract | Publisher Full Text\n\nDomanski D, Percy AJ, Yang J, et al.: MRM-based multiplexed quantitation of 67 putative cardiovascular disease biomarkers in human plasma. Proteomics. 2012; 12(8): 1222–43. PubMed Abstract | Publisher Full Text\n\nPercy AJ, Chambers AG, Yang J, et al.: Multiplexed MRM-based quantitation of candidate cancer biomarker proteins in undepleted and non-enriched human plasma. Proteomics. 2013; 13(14): 2202–15. PubMed Abstract | Publisher Full Text\n\nKuzyk MA, Smith D, Yang J, et al.: Multiple reaction monitoring-based, multiplexed, absolute quantitation of 45 proteins in human plasma. Mol Cell Proteomics. 2009; 8(8): 1860–77. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPercy AJ, Yang J, Hardie DB, et al.: Precise quantitation of 136 urinary proteins by LC/MRM-MS using stable isotope labeled peptides as internal standards for biomarker discovery and/or verification studies. Methods. Elsevier Inc.; 2015; 81: 24–33. PubMed Abstract | Publisher Full Text\n\nAbeijon C, Kashino SS, Silva FO, et al.: Identification and diagnostic utility of Leishmania infantum proteins found in urine samples from patients with visceral leishmaniasis. Clin Vaccine Immunol. 2012; 19(6): 935–43. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbeijon C, Campos-Neto A: Potential non-invasive urine-based antigen (protein) detection assay to diagnose active visceral leishmaniasis. PLoS Negl Trop Dis. 2013; 7(5): e2161. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKruh-Garcia NA, Wolfe LM, Chaisson LH, et al.: Detection of Mycobacterium tuberculosis peptides in the exosomes of patients with active and latent M. tuberculosis infection using MRM-MS. Koomen JM, editor. PLoS One. 2014; 9(7): e103811. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYoung BL, Mlamla Z, Gqamana PP, et al.: The identification of tuberculosis biomarkers in human urine samples. Eur Respir J. 2014; 43(6): 1719–29. PubMed Abstract | Publisher Full Text\n\nKashino SS, Pollock N, Napolitano DR, et al.: Identification and characterization of Mycobacterium tuberculosis antigens in urine of patients with active pulmonary tuberculosis: an innovative and alternative approach of antigen discovery of useful microbial molecules. Clin Exp Immunol. Wiley-Blackwell; 2008; 153(1): 56–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim SH, Lee NE, Lee JS, et al.: Identification of Mycobacterial Antigens in Human Urine by Use of Immunoglobulin G Isolated from Sera of Patients with Active Pulmonary Tuberculosis. Land GA, editor. J Clin Microbiol. 2016; 54(6): 1631–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorkowski KA, Berman S; Centers for Disease Control and Prevention (CDC): Sexually transmitted diseases treatment guidelines, 2010. MMWR Recomm Rep. 2010; 59(RR–12): 1–110. PubMed Abstract\n\nFrench P, Gomberg M, Janier M, et al.: IUSTI: 2008 European Guidelines on the Management of Syphilis. Int J STD AIDS. 2009; 20(5): 300–9. 
PubMed Abstract | Publisher Full Text\n\nLiu H, Rodes B, Chen CY, et al.: New tests for syphilis: rational design of a PCR method for detection of Treponema pallidum in clinical specimens using unique regions of the DNA polymerase I gene. J Clin Microbiol. 2001; 39(5): 1941–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFlasarová M, Pospíšilová P, Mikalová L, et al.: Sequencing-based molecular typing of Treponema pallidum strains in the Czech Republic: all identified genotypes are related to the sequence of the SS14 strain. Acta Derm Venereol. 2012; 92(6): 669–74. PubMed Abstract | Publisher Full Text\n\nTipple C, Jones R, McClure M, et al.: Rapid Treponema pallidum clearance from blood and ulcer samples following single dose benzathine penicillin treatment of early syphilis. Vinetz JM, editor. PLoS Negl Trop Dis. 2015; 9(2): e0003492. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRai AJ, Gelfand CA, Haywood BC, et al.: HUPO Plasma Proteome Project specimen collection and handling: towards the standardization of parameters for plasma proteome samples. Proteomics. 2005; 5(13): 3262–77. PubMed Abstract | Publisher Full Text\n\nAguilar-Mahecha A, Kuzyk MA, Domanski D, et al.: The effect of pre-analytical variability on the measurement of MRM-MS-based mid- to high-abundance plasma protein biomarkers and a panel of cytokines. Krauss-Etschmann S, editor. PLoS One. 2012; 7(6): e38290. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchuchard MD, Mehigh RJ, Cockrill SL, et al.: Artifactual isoform profile modification following treatment of human plasma or serum with protease inhibitor, monitored by 2-dimensional electrophoresis and mass spectrometry. Biotechniques. 2005; 39(2): 239–47. PubMed Abstract\n\nHuman Proteome Organization: Human Kidney and Urine Proteome Project. Standard Protocol for Urine Collection and Storage. 
Reference Source\n\nFusaro VA, Mani DR, Mesirov JP, et al.: Prediction of high-responding peptides for targeted protein assays by mass spectrometry. Nat Biotechnol. 2009; 27(2): 190–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Gish W, Miller W, et al.: Basic local alignment search tool. J Mol Biol. 1990; 215(3): 403–10. PubMed Abstract | Publisher Full Text\n\nDesiere F, Deutsch EW, King NL, et al.: The PeptideAtlas project. Nucleic Acids Res. Oxford University Press; 2006; 34(Database issue): D655–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLukehart SA, Marra CM: Isolation and laboratory maintenance of Treponema pallidum. Curr Protoc Microbiol. 2007; Chapter 12: Unit 12A.1. PubMed Abstract | Publisher Full Text\n\nHanff PA, Norris SJ, Lovett MA, et al.: Purification of Treponema pallidum, Nichols strain, by Percoll density gradient centrifugation. Sex Transm Dis. 1984; 11(4): 275–86. PubMed Abstract | Publisher Full Text\n\nFraser CM, Norris SJ, Weinstock GM, et al.: Complete genome sequence of Treponema pallidum, the syphilis spirochete. Science. American Association for the Advancement of Science. 1998; 281(5375): 375–88. PubMed Abstract | Publisher Full Text\n\nKeller A, Nesvizhskii AI, Kolker E, et al.: Empirical statistical model to estimate the accuracy of peptide identifications made by MS/MS and database search. Anal Chem. 2002; 74(20): 5383–92. PubMed Abstract | Publisher Full Text\n\nDesrosiers DC, Anand A, Luthra A, et al.: TP0326, a Treponema pallidum β-barrel assembly machinery A (BamA) orthologue and rare outer membrane protein. Mol Microbiol. 2011; 80(6): 1496–515. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCameron CE, Lukehart SA, Castro C, et al.: Opsonic potential, protective capacity, and sequence conservation of the Treponema pallidum subspecies pallidum Tp92. J Infect Dis. 2000; 181(4): 1401–13. 
PubMed Abstract | Publisher Full Text\n\nLuthra A, Anand A, Hawley KL, et al.: A Homology Model Reveals Novel Structural Features and an Immunodominant Surface Loop/Opsonic Target in the Treponema pallidum BamA Ortholog TP_0326. J Bacteriol. 2015; 197(11): 1906–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGayet-Ageron A, Laurent F, Schrenzel J, et al.: Performance of the 47-kilodalton membrane protein versus DNA polymerase I genes for detection of Treponema pallidum by PCR in ulcers. J Clin Microbiol. 2015; 53(3): 976–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCenturion-Lara A, Giacani L, Godornes C, et al.: Fine analysis of genetic diversity of the tpr gene family among treponemal species, subspecies and strains. PLoS Negl Trop Dis. Public Library of Science; 2013; 7(5): e2222. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhou L, Gong R, Lu X, et al.: Development of a Multiplex Real-Time PCR Assay for the Detection of Treponema pallidum, HCV, HIV-1, and HBV. Jpn J Infect Dis. 2015; 68(6): 481–7. PubMed Abstract | Publisher Full Text\n\nCruz AR, Pillay A, Zuluaga AV, et al.: Secondary syphilis in cali, Colombia: new concepts in disease pathogenesis. PLoS Negl Trop Dis. Lukehart S, editor. Public Library of Science; 2010; 4(5): e690. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPinto M, Antelo M, Ferreira R, et al.: A retrospective cross-sectional quantitative molecular approach in biological samples from patients with syphilis. Microb Pathog. 2017; 104: 296–302. PubMed Abstract | Publisher Full Text\n\nLawn SD: Point-of-care detection of lipoarabinomannan (LAM) in urine for diagnosis of HIV-associated tuberculosis: a state of the art review. BMC Infect Dis. BioMed Central Ltd; 2012; 12: 103. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu C, Duan J, Liu T, et al.: Contributions of immunoaffinity chromatography to deep proteome profiling of human biofluids. 
J Chromatogr B Analyt Technol Biomed Life Sci. 2016; 1021: 57–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPercy AJ, Simon R, Chambers AG, et al.: Enhanced sensitivity and multiplexing with 2D LC/MRM-MS and labeled standards for deeper and more comprehensive protein quantitation. J Proteomics. 2014; 106: 113–24. PubMed Abstract | Publisher Full Text\n\nShi T, Su D, Liu T, et al.: Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics. Proteomics. 2012; 12(8): 1074–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKeshishian H, Addona T, Burgess M, et al.: Quantitative, multiplexed assays for low abundance proteins in plasma by targeted mass spectrometry and stable isotope dilution. Mol Cell Proteomics. 2007; 6(12): 2212–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYadav AK, Bhardwaj G, Basak T, et al.: A systematic analysis of eluted fraction of plasma post immunoaffinity depletion: implications in biomarker discovery. PLoS One. Public Library of Science; 2011; 6(9): e24442. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCameron CE: Identification of a Treponema pallidum laminin-binding protein. Infect Immun. American Society for Microbiology (ASM); 2003; 71(5): 2525–33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCameron CE, Brown EL, Kuroiwa JM, et al.: Treponema pallidum fibronectin-binding proteins. J Bacteriol. 2004; 186(20): 7019–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrinkman MB, McGill MA, Pettersson J, et al.: A novel Treponema pallidum antigen, TP0136, is an outer membrane protein that binds human fibronectin. Infect Immun. American Society for Microbiology; 2008; 76(5): 1848–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPenn CW, Cockayne A, Bailey MJ: The outer membrane of Treponema pallidum: biological significance and biochemical properties. J Gen Microbiol. Microbiology Society; 1985; 131(9): 2349–57. 
PubMed Abstract | Publisher Full Text\n\nKrastins B, Prakash A, Sarracino DA, et al.: Rapid development of sensitive, high-throughput, quantitative and highly selective mass spectrometric targeted immunoassays for clinically important proteins in human plasma and serum. Clin Biochem. 2013; 46(6): 399–410. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNelson RW, Krone JR, Bieber AL, et al.: Mass Spectrometric Immunoassay. Anal Chem. American Chemical Society; 1995; 67(7): 1153–8. PubMed Abstract | Publisher Full Text\n\nMadian AG, Rochelle NS, Regnier FE: Mass-linked immuno-selective assays in targeted proteomics. Anal Chem. American Chemical Society; 2013; 85(2): 737–48. PubMed Abstract | Publisher Full Text\n\nBrinkman MB, Mckevitt M, McLoughlin M, et al.: Reactivity of antibodies from syphilis patients to a protein array representing the Treponema pallidum proteome. J Clin Microbiol. American Society for Microbiology; 2006; 44(3): 888–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSalazar JC, Hazlett KR, Radolf JD: The immune response to infection with Treponema pallidum, the stealth pathogen. Microbes Infect. 2002; 4(11): 1133–40. PubMed Abstract | Publisher Full Text\n\nLafond RE, Lukehart SA: Biological basis for syphilis. Clin Microbiol Rev. 2006; 19(1): 29–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnderson NL, Anderson NG, Haines LR, et al.: Mass spectrometric quantitation of peptides and proteins using Stable Isotope Standards and Capture by Anti-Peptide Antibodies (SISCAPA). J Proteome Res. 2004; 3(2): 235–44. PubMed Abstract | Publisher Full Text\n\nWhiteaker JR, Zhao L, Anderson L, et al.: An automated and multiplexed method for high throughput peptide immunoaffinity enrichment and multiple reaction monitoring mass spectrometry-based quantification of protein biomarkers. Mol Cell Proteomics. American Society for Biochemistry and Molecular Biology; 2010; 9(1): 184–96. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiacani L, Lukehart S, Centurion-Lara A: Length of guanosine homopolymeric repeats modulates promoter activity of subfamily II tpr genes of Treponema pallidum ssp. pallidum. FEMS Immunol Med Microbiol. The Oxford University Press; 2007; 51(2): 289–301. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiacani L, Molini BJ, Kim EY, et al.: Antigenic variation in Treponema pallidum: TprK sequence diversity accumulates in response to immune pressure during experimental syphilis. J Immunol. American Association of Immunologists; 2010; 184(7): 3822–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMolina H, Horn DM, Tang N, et al.: Global proteomic profiling of phosphopeptides using electron transfer dissociation tandem mass spectrometry. Proc Natl Acad Sci U S A. 2007; 104(7): 2199–204. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWyss C: Flagellins, but not endoflagellar sheath proteins, of Treponema pallidum and of pathogen-related oral spirochetes are glycosylated. Infect Immun. 1998; 66(12): 5751–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEshghi A, Pinne M, Haake DA, et al.: Methylation and in vivo expression of the surface-exposed Leptospira interrogans outer-membrane protein OmpL32. Microbiology. 2012; 158(Pt 3): 622–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWitchell TD, Eshghi A, Nally JE, et al.: Post-translational modification of LipL32 during Leptospira interrogans infection. Small PLC, editor. PLoS Negl Trop Dis. 2014; 8(10): e3280. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSell S, Salman J, Norris SJ: Reinfection of chancre-immune rabbits with Treponema pallidum. I. Light and immunofluorescence studies. Am J Pathol. 1985; 118(2): 248–55. PubMed Abstract | Free Full Text\n\nPercy AJ, Chambers AG, Smith DS, et al.: Standardized protocols for quality control of MRM-based plasma proteomic workflows. J Proteome Res. 
American Chemical Society; 2013; 12(1): 222–33. PubMed Abstract | Publisher Full Text\n\nEyford BA, Ahmad R, Enyaru JC, et al.: Identification of Trypanosome proteins in Plasma from African sleeping sickness patients infected with T. b. rhodesiense. PLoS One. 2013; 8(8): e71463. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnjo SI, Santa C, Manadas B: SWATH-MS as a tool for biomarker discovery: From basic research to clinical applications. Proteomics. 2017; 17(3–4): 1600278. PubMed Abstract | Publisher Full Text\n\nNigjeh EN, Chen R, Brand RE, et al.: Quantitative Proteomics Based on Optimized Data-Independent Acquisition in Plasma Analysis. J Proteome Res. 2017; 16(2): 665–76. PubMed Abstract | Publisher Full Text\n\nGillet LC, Navarro P, Tate S, et al.: Targeted data extraction of the MS/MS spectra generated by data-independent acquisition: a new concept for consistent and accurate proteome analysis. Mol Cell Proteomics. American Society for Biochemistry and Molecular Biology; 2012; 11(6): O111.016717. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "35906",
"date": "27 Jul 2018",
"name": "Mohd M. Khan",
"expertise": [
"Reviewer Expertise Mass Spectrometry based Proteomics",
"Secretomics",
"and Phosphoproteomics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe article “Needle lost in the haystack: multiple reaction monitoring fails to detect Treponema pallidum candidate protein biomarkers in plasma and urine samples from individuals with syphilis” focuses on developing a targeted proteomics-based assay to detect and validate potential biomarkers of Treponema pallidum infection. The manuscript is well written, the rationale behind the work is justified, and the methods section is detailed.\n\nI would suggest a few points, as follows:\n\nPlease add the details about participant consent and study approval by the Institutional Review Board (IRB). Why were only 4 urine samples analyzed? Which protein marker(s) should supposedly be detected in urine, and which one(s) in plasma? Please consider adding the details and rationale. The authors have used microflow for the experiments on the quadrupole, which has lower sensitivity. Perhaps targeted experiments using a nanoflow setup, as was done for the experiments using the orbitrap, would get better sensitivity. The synthetic peptides were added after SPE clean-up. Why weren't they added before SPE to determine losses? Please supply more information on how the LOD was calculated; was it based on the dilution series? Readers could use an explanation of the calculations, if provided. Should the spiking experiments have been done in real matrix?\n\nPlease comment more on PTP selection: are the peptides unique? What about labile residues? Was anything done to look at methionine oxidation? Deamidations? 
etc.\n\nThe synthetic peptides were 95% pure. Were they quantified? (AAA?)\n\nThe paper makes an interesting list of its shortcomings in the discussion, which is helpful; much of the critique is already acknowledged by the authors. The overall conclusion of the manuscript is: \"A lot of effort and fine-tuning of sample prep/method development will be needed for biomarker discovery and validation.\" Biomarker validation is time-consuming and challenging; perhaps some orthogonal experiments (such as western blotting) should have been done to establish whether the global data acquired using spectral counting were good enough before moving on to the MRM experiments. Nonetheless, the authors have done a great job in discussing the shortcomings and in writing the paper.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "4032",
"date": "09 Oct 2018",
"name": "Kara Osbak",
"role": "Author Response",
"response": "Please add the details about participant consent and study approval by the Institutional Review Board (IRB).\n\nThe following information is available under the section “Consent and Ethics Approval”: “The prospective observational cohort study (SeTPAT ClinicalTrials.gov # NCT02059525) that provided the clinical samples used in this study was approved by the Institutional Review Board of the Institute of Tropical Medicine Antwerp and the Ethics Committee of the University of Antwerp (13/44/426), Belgium. Written informed consent for publication of the participants’ anonymized details was obtained from the participants.”\n\nWhy were only 4 urine samples analyzed? Which protein marker(s) should supposedly be detected in urine, and which one(s) in plasma? Please consider adding the details and rationale.\n\nAs this was an exploratory biomarker study we did not stratify our biomarker selection by biofluid type; thus, the potential biomarkers mentioned in this study were theoretically applicable to both urine and blood. No previous studies of this type have been performed, hence our selection, as described, was based on our shotgun proteomics studies of Treponema pallidum during rabbit infection, literature inferences from previous microarray studies, and physicochemical characteristics that would be amenable to MRM detection. Admittedly, analyzing only four urine samples is a small number. After analyzing the initial set of samples, it became apparent that our experimental strategy was not working, so we decided not to go further. Despite this small number, we believe this information might be useful for other groups considering employing similar methods.\n\nThe authors have used microflow for the experiments on the quadrupole, which has lower sensitivity. Perhaps targeted experiments using a nanoflow setup, as was done for experiments using the orbitrap, would get better sensitivity.\n\n
Indeed, one would expect higher sensitivity with the nanoflow set-up; however, to analyze larger volumes of patient material, which might also increase the sensitivity, the microflow set-up is more advantageous. Moreover, targeted microflow LC-MS/MS experiments offer the benefit of increased throughput (the initial goal was to develop a method to analyze larger sample cohorts in a short time) and robustness.\n\nThe synthetic peptides were added after SPE clean-up. Why weren't they added before SPE to determine losses?\n\nIn this exploratory study, multiple candidate biomarkers were included in the targeted setup to evaluate their potency. Therefore, it was not yet clear at which final concentration these synthetic peptides should be spiked into the samples. At this stage of the study, the synthetic peptides were also not exactly quantified (i.e. AQUA Basic peptides), which would make the determination of losses during sample preparation imprecise.\n\nPlease supply more information on how the LOD was calculated; was it based on the dilution series? Readers could use an explanation of the calculations, if provided.\n\nThe LOD calculations were based on the dilution series of T. pallidum in PBS, which were either enriched or unenriched with magnetic beads coupled with polyclonal antibodies directed against T. pallidum. These were then subjected to LTQ Orbitrap analyses. Two unique T. pallidum proteins, Cytoplasmic filament protein A (TP_0748) and Lipoprotein antigen Tp47 (TP_0547), were found in the 300 bacteria/ml fraction in the enriched and unenriched samples, respectively. Therefore, the LOD based on a high-resolution LTQ-Orbitrap instrument was approximately 300 bacteria/ml PBS for both the antibody-enriched and unenriched samples, meaning there was no significant improvement in LOD using bead enrichment. These results are detailed in Supplementary File 3. 
Furthermore, rough concentration calculations based on previous studies were presented in Supplementary Table 4, which estimated that the concentration of T. pallidum/target PTPs in human serum would be in the femtomoles-per-liter range during human T. pallidum infection (calculations presented in Supplementary File 4).\n\nShould the spiking experiments have been done in real matrix?\n\nIndeed, the final dilution series of the labeled synthetic peptides, which would be used to determine the absolute concentration of the candidate biomarkers, would have been done in real matrix. However, at this point in the study the goal was to evaluate the abundance of the selected proteotypic peptides before using absolutely quantified labeled peptides (e.g. AQUA Ultimate). Therefore, it was decided to tune and optimize the LC-MS/MS parameters of each labeled peptide without any matrix to determine the most optimal instrument settings.\n\nPlease comment more on PTP selection: are the peptides unique? What about labile residues? Was anything done to look at methionine oxidation? Deamidations? etc.\n\nDue to the lack of available MS datasets on the Treponema pallidum proteome (no library available), proteotypic peptides of each candidate protein biomarker were predicted in silico. As described, the ESP predictor (Fusaro et al., 2009, Nat Biotechnol) was used to find the most suitable proteotypic peptides based on 550 physicochemical parameters, including potential modifications (e.g. oxidation of methionine, deamidation, phosphorylation, etc.). The best-scoring peptides were selected for each of the proteins. Moreover, the PTPs that were selected were subjected to BLAST analyses to confirm their uniqueness.\n\nThe synthetic peptides were 95% pure. Were they quantified? (AAA?)\n\nDuring this exploratory study, AQUA Basic peptides (Thermo Fisher Scientific) were used to evaluate the abundance of the selected proteotypic peptides. 
Although the quantity of the PTPs was specified in the leaflet, the peptides were purchased in a lyophilized formulation as one aliquot. Therefore, they are not suited as a reference for absolute quantification. As a next step, AQUA Ultimate peptides (with high concentration precision) would have been used to determine the absolute abundance of the protein biomarkers."
}
]
},
{
"id": "35202",
"date": "22 Aug 2018",
"name": "Timothy Palzkill",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis paper by Van Raemdonck describes the use of mass spectrometry to identify T. pallidum proteins in plasma and urine from infected patients. If successful, such a method would be very useful in syphilis diagnostics, particularly with regard to reinfection. Thus, the work addresses a significant problem. However, although they could detect isotopically labeled peptides spiked into the samples, they could not detect T. pallidum proteins from the infecting organisms. Limit-of-detection experiments suggest this is due to the very low concentrations of T. pallidum proteins in plasma and serum samples. Thus, the main goal produced a negative result. However, there is considerable useful information in this study. The limit-of-detection experiments with T. pallidum bacteria that had been diluted and were unenriched or enriched with antibody beads provided interesting results on which proteins could be detected and how many bacteria per ml were needed for detection. In addition, the MRM experiments appear to be carefully designed and provide important limit-of-detection information for future studies. The discussion provides a useful assessment of the limiting factors in the direct detection of T. pallidum antigen proteins.\n\nComments: 1. Some of the description of the LOD experiments with dilutions of T. pallidum on page 7, right column, paragraph 2, is difficult to follow. The authors state “In total, eight unique T. 
pallidum proteins were found in the unenriched and enriched retentate…” but they do not state a dilution. Later in the paragraph, it states “Ten unique T. pallidum proteins were found in the highest concentration….”. This is confusing. What condition do the eight unique proteins refer to?\n\n2. Page 7, right column, paragraph 2. Tp47 is discussed twice in the paragraph with a different gene name each time, i.e., Tp47 (TP_0547) and Tp47 (TP_0574).\n\n3. Page 3, Introduction, paragraph 1, line 1. “T. pallidum, a non-culturable…” Suggest updating based on the recent publication in mBio on culturing T. pallidum.\n\n4. Figure 3. Please indicate the meaning of the asterisks on TP_0249, etc.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "4031",
"date": "09 Oct 2018",
"name": "Kara Osbak",
"role": "Author Response",
"response": "1. Some of the description of the LOD experiments with dilutions of T. pallidum on page 7, right column, paragraph 2, is difficult to follow. The authors state “In total, eight unique T. pallidum proteins were found in the unenriched and enriched retentate…” but they do not state a dilution. Later in the paragraph, it states “Ten unique T. pallidum proteins were found in the highest concentration….”. This is confusing. What condition do the eight unique proteins refer to?\n\nThis ambiguous sentence has now been reworded: “In total, eight unique T. pallidum proteins were found in both the unenriched and enriched retentate dilution series in one or more of the concentrations analyzed”. This refers to the proteins that were commonly found in both experiments, regardless of concentration; the details are provided in Supplementary Table 3. The other sentence was also reworded for clarity: “Ten unique T. pallidum proteins were found in the highest concentration (10^4 bacteria/ml), four in the enriched retentate sample (N = 10) and non-enriched sample (N = 10); two proteins detected were unique to either the enriched or unenriched samples (Figure 3).”\n\n2. Page 7, right column, paragraph 2. Tp47 is discussed twice in the paragraph with different gene names each time, i.e., Tp47 (TP_0547) and Tp47 (TP_0574).\n\nThanks for pointing this out; this has been rectified to the actual ORF (TP_0574).\n\n3. Page 3, Introduction, paragraph 1, line 1. “T. pallidum, a non-culturable…” Suggest updating based on the recent publication in mBio on culturing T. pallidum.\n\nGood idea, this reference has now been added.\n\n4. Figure 3. Please indicate the meaning of the asterisks on TP_0249, etc.\n\nThe asterisks refer to “*-proteins selected as candidate biomarkers in this study.” This information has now been added."
}
]
}
] | 1
|
https://f1000research.com/articles/7-336
|
https://f1000research.com/articles/7-1616/v1
|
08 Oct 18
|
{
"type": "Case Report",
"title": "Case Report: Immediate pain relief after partial pulpotomy of cariously exposed young permanent molar using mineral trioxide aggregate and root maturation, with two years follow-up",
"authors": [
"Passant Nagi",
"Nevine Waly",
"Adel Elbardissy",
"Mohammed Khalifa",
"Nevine Waly",
"Adel Elbardissy",
"Mohammed Khalifa"
],
"abstract": "Carious exposure of an immature first permanent molar is a widespread issue faced in paediatric dentistry. This may be the result of the early eruption of this molar, so parents may think it is replaceable like the rest of the deciduous teeth. Preserving pulp vitality is the primary goal in treating those teeth to allow maturation of roots both in length and width. Mineral trioxide aggregate (MTA) is considered a perfect dressing material for pulpotomy (both partial and complete) due to its bio computability and sealing property. We present a case that describes treatment and two years follow-up of a symptomatic immature first permanent molar with a deep carious lesion. For treatment, we started with anaesthesia and rubber dam isolation. After that, the carious lesion was removed, and we performed partial pulpotomy, then applied MTA-Angelos on the fresh wound. Moistened cotton was then lightly packed over MTA for 15 minutes to allow initial setting, followed by application of glass ionomer and final restoration with composite. The following day, the tooth was asymptomatic, with the patient reporting pain relief. At three months follow-up, the tooth responded normally to thermal testing. After 12 months, a periapical radiograph of the tooth showed root maturation, and at 24 months the tooth remained clinically and radiographically successful. MTA partial pulpotomy should be considered in the treatment of symptomatic young permanent teeth.",
"keywords": [
"MTA-Angelus",
"mineral trioxide aggregate",
"partial pulpotomy",
"vital young permanent",
"apexogenesis",
"deep caries."
],
"content": "Introduction\n\nA symptomatic young permanent first molar is a widespread occurrence, since this is the first permanent tooth erupting in the oral cavity and parents may consider it replaceable like the rest of the child's baby teeth. The primary goal for treating those teeth is to maintain healthy pulp to allow the root to continue maturation both in length and width.\n\nPartial pulpotomy is considered a promising modality for the treatment of immature permanent teeth with carious pulp exposure. This technique consists of excavation of 2–3 mm of inflamed coronal pulp tissue, and the remaining pulp is then capped with a dressing material that maintains its viability and promotes healing. When comparing partial with cervical pulpotomy, partial pulpotomy preserves the cell-rich coronal pulp tissue, which is necessary for healing and the formation of a dentin bridge in the coronal area. Cervical pulpotomy, on the contrary, removes all the coronal pulp, with an increased risk of cervical fracture due to the loss of physiologic dentin apposition1.\n\nIn previous studies, partial pulpotomy gave high clinical success rates (91–93%) in asymptomatic young permanent molars with deep caries1–3. However, some case reports reveal that partial pulpotomy may also have a good prognosis in symptomatic teeth4. In addition, a randomised clinical trial reported treating molars with irreversible pulpitis using partial pulpotomy, and the results were promising5.\n\nChoice of capping materials or medicaments can have a major influence on vital pulp therapy success. Mineral trioxide aggregate (MTA) is considered the gold standard of pulp dressing material. MTA provides a long-term seal, acceptable biocompatibility, and dentinal bridge formation6. Roberts et al. 
reviewed the literature in 2008, showing that MTA has excellent potential as a pulpotomy medicament and can form hydroxyapatite when exposed to physiologic solutions7.\n\nThis case report presents the treatment of pulpitis in a young permanent molar using MTA-Angelos partial pulpotomy.\n\n\nCase report\n\nAt our Paediatric Dentistry Clinic, Faculty of Dentistry, Cairo University, Egypt, an 8-year-old boy presented with acute provoked pain in the lower right posterior area that lingered after removal of the stimulus, and the parent reported the child taking painkillers. No other medical or psychological problems that would affect the dental treatment were found.\n\nClinical and radiographic examinations showed caries in the lower right first permanent molar approaching the pulp. The molar showed incomplete root formation (Figure 1). The diagnosis was acute pulpitis of the lower right first permanent molar. Partial pulpotomy was proposed to allow root formation.\n\nWe began with the administration of an inferior alveolar nerve block (Table 1, item 1), followed by isolation of the tooth using a rubber dam (Table 1, item 2). Caries was removed using a suitable round carbide bur under copious water coolant, then a spoon excavator was used to excavate pulp through the exposed part. To control bleeding, the wound was gently flushed with distilled water until bleeding was controlled, and a lightly packed cotton pellet was applied. MTA-Angelus (Table 1, item 3) was freshly mixed following the manufacturer’s directions immediately before being placed and condensed gently over wet cotton against the fresh pulp wound. Excess material was scraped off, followed by the application of moistened cotton for 15 minutes to allow initial setting8. Subsequently, a self-cure glass ionomer (Table 1, item 4) was applied as a base material at 2 mm thickness. Final restoration using composite was performed (Table 1, item 5)6,9. 
A periapical radiograph was taken as a baseline record for comparison with follow-up appointments (Figure 2).\n\nOn the following day, a postoperative phone call to the patient’s parents revealed that the patient felt pain relief.\n\nAt one week follow-up, the tooth responded to thermal pulpal tests within reasonable limits. After three months, the pulpal sensitivity test gave a normal reading, and clinical and radiographic examinations were normal. The patient was followed up every three months, with no complaints from the treated tooth for 12 months, and the root showed complete maturation (Figure 3). After that, the patient was lost to follow-up. At 24 months, the patient came back to the clinic for treatment of a different tooth, and examination of the lower right first permanent molar showed clinical and radiographic success (Figure 4).\n\n\nDiscussion\n\nThe partial pulpotomy technique obtains good clinical outcomes with different capping materials4,5. MTA is considered the gold standard for vital pulp therapy6. MTA has excellent sealing ability and biological properties that preserve pulp viability in immature permanent teeth with irreversible pulpitis10,11. Partial pulpotomy using MTA, as opposed to root canal therapy or apexification, is more conservative and allows root maturation both in length and width4, and this was observed in our case report.\n\n\nConclusions\n\nMTA-Angelus partial pulpotomy appears to be a successful treatment for symptomatic immature permanent teeth with deep caries and vital pulps. However, we recommend conducting more clinical studies with a large sample size and longer follow-up period to validate our observations. 
Partial pulpotomy technique should also be tested in older ages with mature roots.\n\n\nConsent\n\nAfter a full explanation of the procedure, written informed consent was obtained from the parent of the child.\n\nThe patient's mother gave written informed consent for the publication of this case report and any associated images.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nChailertvanitkul P, Paphangkorakit J, Sooksantisakoonchai N, et al.: Randomized control trial comparing calcium hydroxide and mineral trioxide aggregate for partial pulpotomies in cariously exposed pulps of permanent molars. Int Endod J. 2014; 47(9): 835–42. PubMed Abstract | Publisher Full Text\n\nMass E, Zilberman U: Long-term radiologic pulp evaluation after partial pulpotomy in young permanent molars. Quintessence Int. 2011; 42(7): 547–54. PubMed Abstract\n\nQudeimat MA, Barrieshi-Nusair KM, Owais AI: Calcium hydroxide vs mineral trioxide aggregates for partial pulpotomy of permanent molars with deep caries. Eur Arch Paediatr Dent. 2007; 8(2): 99–104. PubMed Abstract | Publisher Full Text\n\nVillat C, Grosgogeat B, Seux D, et al.: Conservative approach of a symptomatic carious immature permanent tooth using a tricalcium silicate cement (Biodentine): a case report. Restor Dent Endod. 2013; 38(4): 258–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAsgary S, Hassanizadeh R, Torabzadeh H, et al.: Treatment Outcomes of 4 Vital Pulp Therapies in Mature Molars. J Endod. 2018; 44(4): 529–35. PubMed Abstract | Publisher Full Text\n\nNosrat A, Seifi A, Asgary S: Pulpotomy in caries-exposed immature permanent molars using calcium-enriched mixture cement or mineral trioxide aggregate: a randomized clinical trial. Int J Paediatr Dent. 2013; 23(1): 56–63. PubMed Abstract | Publisher Full Text\n\nRoberts HW, Toth JM, Berzins DW, et al.: Mineral trioxide aggregate material use in endodontic treatment: a review of the literature. Dent Mater. 2008; 24(2): 149–64. PubMed Abstract | Publisher Full Text\n\nNagi P, El-Bardissy A, Bahgat S: A Comparative Clinical Study of MTA Vs Portland cement as Capping Materials in Pulpotomy of Primary Molars. 2012. 
Reference Source\n\nAsgary S, Eghbal MJ: Treatment outcomes of pulpotomy in permanent molars with irreversible pulpitis using biomaterials: a multi-center randomized controlled trial. Acta Odontol Scand. 2013; 71(1): 130–6. PubMed Abstract | Publisher Full Text\n\nÖzgür B, Uysal S, Güngör HC: Partial Pulpotomy in Immature Permanent Molars After Carious Exposures Using Different Hemorrhage Control and Capping Materials. Pediatr Dent. 2017; 39(5): 364–70. PubMed Abstract\n\nKang CM, Sun Y, Song JS, et al.: A randomized controlled trial of various MTA materials for partial pulpotomy in permanent teeth. J Dent. 2017; 60: 8–13. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "55633",
"date": "18 Nov 2019",
"name": "Papimon Chompu-inwai",
"expertise": [
"Vital pulp therapy."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis case report described the success of partial pulpotomy with MTA-Angelus in a tooth with irreversible pulpitis in an 8-year-old child.\nHowever, the background of the case’s history, diagnostic tests, treatment given and outcomes were not described in sufficient detail.\nWhat kind of stimulus produced the pain in this patient? (cold water, sweet?)\n\nWas the pain lingering?\n\nWhat kind of painkiller did the child take? Did it help with the pain?\n\nWere any sensibility tests (cold or EPT) performed before treatment?\n\nThis information will lead to the diagnosis of the tooth. The diagnosis should be consistent throughout (acute pulpitis vs irreversible pulpitis).\n\nThe tooth should be diagnosed prior to treatment.\n\nWhat type of pulp exposure? Carious, mechanical?\n\nHow big was the exposure?\n\nHow long did you control the bleeding?\n\nAt three months, was the pulpal sensitivity test the EPT test? Please specify.\n\nAt 24 months, please be more specific about clinical and radiographic success.\nPlease address in the discussion:\nThe histology of irreversible pulpitis that allows the success of partial pulpotomy.\n\nMTA should not be described as a \"perfect\" dressing material. It has both desired properties and drawbacks. Besides biocompatibility and sealing ability, other good biological properties (antibacterial activity, induction of reparative dentin formation, less inflammation compared to calcium hydroxide, etc.) should also be described. 
Drawbacks should also be described (long setting time, discoloration, poor strength in early phase, etc).\n\nWhy did you choose to use glass ionomer on top of MTA?\nThere are several misspellings.\ncarious not curious.\n\nblunderbuss not blunder-bus.\n\nbiocompatibility not bio computability.\n\nMTA-Angelus not Angelos.\n\nIs the background of the case’s history and progression described in sufficient detail? Partly\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Partly",
"responses": []
},
{
"id": "61788",
"date": "16 Apr 2020",
"name": "Hamdi Cem Güngör",
"expertise": [
"Pediatric dentistry",
"dental traumatic injuries",
"preventive dentistry",
"fissure sealants",
"vital pulp treatments for primary and permanent teeth"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors report on the use of the partial pulpotomy procedure, which they carried out on a symptomatic first permanent molar. As reported in the manuscript, the tooth presented symptoms of irreversible pulpitis, which included acute provoked and lingering pain necessitating the use of analgesics. This reviewer suggests major editing/revising of the manuscript in order to attain better readability.\n\nOf the sections of the manuscript, the case report and discussion require major revision:\n\nIn the case report section, a detailed history of the case, including examination, diagnosis, treatment and post-treatment, could be presented.\n\nFor the discussion section, I would recommend the addition of recent literature on the use of partial pulpotomy for teeth with irreversible pulpitis. The authors should also shed more light by discussing the treatment carried out and its outcome for that specific tooth.\n\n\"Partial pulpotomy technique should also be tested in older ages with mature roots.\" The sentence could be discarded from the conclusion as it is irrelevant.\n\nProper citation of reference #8 should be made.\n\nThe manuscript could also highly benefit from a professional editing/revision service for better service to the readership.\n\nIs the background of the case’s history and progression described in sufficient detail? No\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? 
No\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? No\n\nIs the case presented with sufficient detail to be useful for other practitioners? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1616
|
https://f1000research.com/articles/7-1612/v1
|
08 Oct 18
|
{
"type": "Research Article",
"title": "Prophylactic potential of a Panchgavya formulation against certain pathogenic bacteria",
"authors": [
"Pooja Patel",
"Chinmayi Joshi",
"Snehal Funde",
"Hanumanthrao Palep",
"Vijay Kothari",
"Pooja Patel",
"Chinmayi Joshi",
"Snehal Funde",
"Hanumanthrao Palep"
],
"abstract": "A Panchgavya preparation was evaluated for its prophylactic efficacy against bacterial infection, employing the nematode worm Caenorhabditis elegans as a model host. Worms fed with the Panchgavya preparation prior to being challenged with pathogenic bacteria had a better survival rate against four out of five test bacterial pathogens, as compared to the control worms. Panchgavya feeding prior to bacterial challenge was found to be most effective against Staphylococcus aureus, resulting in 27% (p=0.0001) better worm survival. To the best of our knowledge, this is the first report demonstrating in vivo prophylactic efficacy of Panchgavya mixture against pathogenic bacteria.",
"keywords": [
"Panchgavya",
"Prophylactic",
"Anti-infective",
"Caenorhabditis elegans"
],
"content": "Introduction\n\n‘Panchgavya’ is a term used to describe a combination of five major substances obtained from the cow: urine, milk, ghee (clarified butter), curd and dung. Dhanvantari, referred to as the God of Indian Medicine, is said to have offered to mankind this wonder medicine called Panchgavya. In Sanskrit, all its five ingredients are individually called ‘Gavya’ and collectively termed Panchgavya (panch means five). Panchgavya products have been claimed to be beneficial in curing several human ailments, enhancing immunity and providing resistance to fight infections (Dhama et al., 2005). Panchgavya therapy (cowpathy) has been indicated as an alternate prophylactic and therapeutic modality for sound livestock and poultry health along with human health (Dhama et al., 2014). Panchgavya Prashan is a common tradition followed by certain communities (e.g. Telugu Brahmins) in India, wherein a Panchgavya dose is taken once every year during the monsoon season. The potential applications of Panchgavya as antimicrobials, immune boosters, antidiabetics, anticancer, anticonvulsant, aphrodisiac, blood purifiers, and as a suitable medium to deliver medicines, have caught the attention of scientists and medical professionals (Dhama et al., 2014). In this context, we undertook an investigation of the prophylactic potential of a Panchgavya preparation against bacterial infections in the nematode host Caenorhabditis elegans.\n\n\nMethods\n\nThe Panchgavya formulation used in this study was prepared using a method that was different from the one practiced traditionally (which yields a fermented preparation). Fresh cow dung and urine, sourced from a cow fed on cottonseed and sugarcane grass, were mixed thoroughly in a glass beaker. This mix was allowed to stand for 10 min and subjected to filtration through a muslin cloth (the traditional method does not involve filtration). 
To this filtrate, fresh cow’s milk and fresh curd were added and mixed until a uniform mixture was formed. Finally, cow ghee was added to this mixture and mixed thoroughly.\n\nDung, urine, and milk were all sourced from a single cow. From the same batch of milk, curd and ghee were prepared. Cream of this milk was boiled for 30–40 min and filtered; the filtrate was taken as ghee. For curd preparation, one part of this milk was inoculated with a previous batch of curd (prepared using milk from the same cow by adding a few drops of lemon juice to the milk), followed by overnight incubation at room temperature.\n\nThe ratio of dung:urine:milk:curd:ghee in this preparation was 1:2:3:3:1. This Panchgavya mixture was then transferred to a copper vessel (covered with a muslin cloth) and allowed to rest for 30 min. This was followed by freeze-drying at -20 °C to convert the preparation into powder form, which was stored under refrigeration (4–8°C) until used for the microbiological experiments. When required for use, the Panchgavya powder was suspended in sterile distilled water to attain OD625 = 0.10±0.01.\n\nPathogenic bacteria used in this study included: Staphylococcus aureus (MTCC 737); beta-lactamase producing multidrug resistant strains of Chromobacterium violaceum (MTCC 2656) and Serratia marcescens (MTCC 97); multidrug resistant Pseudomonas aeruginosa; and Streptococcus pyogenes (MTCC 1924). P. aeruginosa was sourced from our internal culture collection. All other cultures were procured from MTCC (Microbial Type Culture Collection, Chandigarh, India).\n\nC. elegans worms (received as a gift from the Biology Division, Sophia College, Mumbai) maintained on NGM (Nematode Growing Medium; 3 g/L NaCl, 2.5 g/L peptone, 1 M CaCl2, 1 M MgSO4, 5 mg/mL cholesterol, 1 M phosphate buffer of pH 6, 17 g/L agar-agar; this medium was prepared by us using the listed ingredients purchased from Merck, Mumbai or HiMedia, Mumbai) agar plate with E. 
coli OP50 (LabTIE B.V., JR Rosmalen, the Netherlands) as food, were kept unfed for 24 h prior to being used for experiments.\n\nThese worms were fed with Panchgavya by mixing this formulation (100 µL) with M9 medium (800 µL) and placed in a 24-well plate (sterile, non-treated polystyrene plates; HiMediaTPG24) containing 10 worms per well. Duration of exposure of worms to Panchgavya was kept at 24, 48, 72 or 96 h, followed by addition of pathogenic bacteria (100 µL of bacterial suspension with OD764 = 1.50). Appropriate controls, i.e. worms previously not exposed to Panchgavya, but exposed to pathogenic bacteria; worms exposed neither to Panchgavya nor bacteria; and worms exposed to Panchgavya, but not to bacterial pathogens, were also included in the experiment. Incubation was carried out at 22°C.\n\nThe number of live vs. dead worms was counted every day for 5 days by putting the plate (with lid) under a light microscope (4X). Straight worms were considered to be dead. Plates were gently tapped to confirm lack of movement in the dead-looking worms. On the last day of the experiment, when plates could be opened, their death was also confirmed by touching them with a straight wire, wherein no movement was taken as confirmation of death.\n\nValues reported are means of four independent experiments, whose statistical significance was assessed using a t-test performed in Microsoft Excel (2013). P values ≤0.05 were considered to be statistically significant.\n\n\nResults\n\nWorms fed on Panchgavya for 24 or 48 h registered survival rates no different (p>0.05) from those of control worms in the face of bacterial challenge (Appendix A and Appendix B). However, worms with 72 or 96 h Panchgavya exposure registered 15–27% (p<0.05) better survival than control worms upon challenge with the different pathogenic bacteria, except for S. pyogenes (Figure 1; Appendix C and Appendix D). 
These results demonstrate the prophylactic potential of Panchgavya against four different gram-positive and gram-negative bacterial infections, wherein previous exposure of C. elegans to this formulation was found to confer statistically significant protection on this worm against subsequent bacterial attack.\n\nPrevious exposure to Panchgavya (for 72 or 96 h) enabled C. elegans population to register better survival in the face of bacterial challenge: (A) 42.50±2.52% (p=0.001) better survival till third day, and 17.50±3.54% (p=0.002) better survival on fifth day, against P. aeruginosa; (B) 27.30±1.86% (p=0.0001) better survival on fifth day, against S. aureus; (C) 21.50±1.04% (p=0.0003) better survival on fifth day, against C. violaceum; (D) 23±1.50% (p=0.002) higher survival on fifth day, against S. marcescens; (E) Panchgavya-exposure was not found to confer any protection on C. elegans against S. pyogenes challenge.\n\nResults pertaining to 72 h and 96 h exposure of worms to Panchgavya, prior to bacterial challenge, were not statistically different. Values reported are means of four independent experiments, whose statistical significance was assessed using t-test performed through Microsoft Excel. P values ≤0.05 were considered to be statistically significant.\n\nHowever, when administered to C. elegans already infected by these pathogens, Panchgavya was not found to offer any survival benefit to the nematode host (Appendix E). 
Additionally, the Panchgavya-exposed worm population was able to generate progeny in the absence as well as the presence of pathogenic bacteria, which did not happen in control wells containing Panchgavya-unexposed worms, suggesting overall higher fitness of Panchgavya-exposed worms.\n\n\nConclusions\n\nThough there are a few reports mentioning in vitro antimicrobial activity of either Panchgavya mixture (Gajbhiye et al., 2018) or its individual components (Deepika et al., 2016), to the best of our knowledge, the present study is the first report demonstrating in vivo anti-infective efficacy of Panchgavya mixture. The observed protective effect of Panchgavya against bacterial infection may in part stem from its immunomodulatory potential (Gajbhiye et al., 2015). This short study validates the therapeutic potential of Panchgavya mentioned in Ayurved (Susruta Samhita, 1885). Further studies for characterization of this ancient formulation (e.g. generating its metagenomic profile, which may reveal the presence of beneficial microbes, and its chemical profile) can provide insights into the mechanisms underlying its anti-infective efficacy.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data has been provided in Appendices A-E, http://dx.doi.org/10.5256/f1000research.16485.d220622 (Patel et al., 2018).\n\nAppendix A: Bacterial challenge to C. elegans fed on Panchgavya for 24 h\n\nAppendix B: Bacterial challenge to C. elegans fed on Panchgavya for 48 h\n\nAppendix C: Bacterial challenge to C. elegans fed on Panchgavya for 72 h\n\nAppendix D: Bacterial challenge to C. elegans fed on Panchgavya for 96 h\n\nAppendix E: Panchgavya tested as a therapy for already infected C. elegans",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nAuthors thank Nirma Education and Research Foundation (NERF, Ahmedabad) for financial and infrastructural support.\n\n\nReferences\n\nDeepika M, Nashima K, Rajeswari S: Antimicrobial activity of panchagavya against urinary tract infection. Int J Curr Pharm Res. 2016; 8(3): 68–70. Reference Source\n\nDhama K, Khurana SK, Karthik K, et al.: Panchgavya: Immune-enhancing and therapeutic perspectives. J Immunolol Immunopathol. 2014; 16: 1–11. Publisher Full Text\n\nDhama K, Rathore R, Chauhan RS, et al.: Panchgavya (Cowpathy): an overview. International Journal of Cow Science. 2005; 1(1): 1–15. Reference Source\n\nGajbhiye SP, More J, Kolte D, et al.: Antimicrobial activities (antibacterial and antifungal) effect of panchagavya alone prepared by fermentation method. 2018; 7(7): 1336–1349. Reference Source\n\nGajbhiye SP, Padmanabhan U, Kothari S, et al.: Immunostimulant activity of a medical preparation panchagavya. Int J Res Pharm Sci. 2015; 5(3): 1–5. Reference Source\n\nPatel P, Joshi C, Funde S, et al.: Dataset 1 in: Prophylactic potential of a Panchgavya formulation against certain pathogenic bacteria. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16485.d220622\n\nSusruta Samhita- The Medical Science of the Ancient Aryans. TR. and Ed. A.C Bandopadhyaya, 2nd ed. Calcutta, 1885. Reference Source"
}
|
[
{
"id": "39211",
"date": "19 Oct 2018",
"name": "Subramani Parasuraman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAuthors have used beta-lactamase producing antibiotic-resistant strains in their study. That is good, as resistant strains of P. aeruginosa and S. marcescens have also been listed by WHO as of high priority. Use of C. elegans as a model host by authors is also logical, as there is some overlap among the virulence factors of pathogenic bacteria damaging C. elegans, and those damaging human cells. While the current study may be approved for publication as a research note, in future authors should try to investigate the mechanisms through which 'Panchgavya' imparts protection to C. elegans against infectious bacteria.",
"responses": []
},
{
"id": "39214",
"date": "26 Oct 2018",
"name": "Prasun Kumar",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAntibiotic-resistant infections are a global threat, and novel antibiotic as well as non-antibiotic approaches to deal with the problem of antimicrobial resistance (AMR) are urgently needed. In this pursuit, taking insights from traditional wisdom seems to be a logical effort. This particular research note describes an investigation of the prophylactic efficacy of a traditional Indian formulation - Panchgavya, and using the nematode worm model has demonstrated such a prophylactic effect against some of the pathogenic bacteria. I appreciate that they have selected important pathogens like P. aeruginosa, and S. marcescens (Enterobacteriaceae).\n\nDespite Panchgavya being an ancient formulation, it seems to be under-investigated by modern-day scientists. Studies like this one attempting to validate the traditional medicine claims are welcome. As mentioned in the conclusion by the authors, they should take up further characterization of this formulation in the near future. If such a standardized formulation can be made available for public use, it may help in reducing the overall infection burden of human populations.\n- The formulation concentration [OD625 = 0.10±0.01], is it right? The authors may use w/v units for clarity. - Will there be any effect of different doses of Panchgavya? - There are many interesting questions that are yet to be addressed, most possibly by taking up this research further and analyzing the mechanism, bioactive component etc. 
- An independent experiment similar to MIC using a CFU-based method for each of the selected pathogens would have shed more light. - Differential behavior on Gram +ve bacteria is quite interesting and must be studied further.",
"responses": []
},
{
"id": "39648",
"date": "31 Dec 2018",
"name": "Neha Jain",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe study on Panchgavya by Patel et al. is an interesting observation. The formulation seems to have a positive effect on worms with bacterial infections; however, at this point the authors have not elucidated the mechanism. A follow-up study could answer the following questions:\n1. What are the genes that Panchgavya may target? 2. Is the protection due to killing of the bacteria? 3. Does Panchgavya change antibiotic resistance patterns in the bugs used in the study? 4. Can it be used with polymicrobial infections and biofilm forming bacteria?\nAdditional comments:\n1. The study design is appropriate and the work is technically sound. 2. The conclusions drawn are adequately supported by the results.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1612
|
https://f1000research.com/articles/6-2055/v1
|
28 Nov 17
|
{
"type": "Method Article",
"title": "Differential methylation analysis of reduced representation bisulfite sequencing experiments using edgeR",
"authors": [
"Yunshun Chen",
"Bhupinder Pal",
"Jane E. Visvader",
"Gordon K. Smyth",
"Yunshun Chen",
"Bhupinder Pal",
"Jane E. Visvader"
],
"abstract": "Studies in epigenetics have shown that DNA methylation is a key factor in regulating gene expression. Aberrant DNA methylation is often associated with DNA instability, which could lead to development of diseases such as cancer. DNA methylation typically occurs in CpG context. When located in a gene promoter, DNA methylation often acts to repress transcription and gene expression. The most commonly used technology of studying DNA methylation is bisulfite sequencing (BS-seq), which can be used to measure genomewide methylation levels on the single-nucleotide scale. Notably, BS-seq can also be combined with enrichment strategies, such as reduced representation bisulfite sequencing (RRBS), to target CpG-rich regions in order to save per-sample costs. A typical DNA methylation analysis involves identifying differentially methylated regions (DMRs) between different experimental conditions. Many statistical methods have been developed for finding DMRs in BS-seq data. In this workflow, we propose a novel approach of detecting DMRs using edgeR. By providing a complete analysis of RRBS profiles of epithelial populations in the mouse mammary gland, we will demonstrate that differential methylation analyses can be fit into the existing pipelines specifically designed for RNA-seq differential expression studies. In addition, the edgeR generalized linear model framework offers great flexibilities for complex experimental design, while still accounting for the biological variability. The analysis approach illustrated in this article can be applied to any BS-seq data that includes some replication, but it is especially appropriate for RRBS data with small numbers of biological replicates.",
"keywords": [
"Methylation",
"BS-seq",
"differential methylation analysis",
"Bioconductor"
],
"content": "Introduction\n\nStudies in the past have shown that DNA methylation, as an important epigenetic factor, plays a vital role in genomic imprinting, X-chromosome inactivation and regulation of gene expression1. Aberrant DNA methylation is often correlated with DNA instability, which leads to development of diseases including imprinting disorders and cancer2,3.\n\nIn mammals, DNA methylation almost exclusively occurs at CpG sites, i.e. regions of DNA where a cytosine (C) is linked by a phosphate (p) and bond to a guanine (G) in the nucleotide sequence from 5’ to 3’. It has been found that 70% ∼ 80% of CpG cytosines are methylated in mammals, regardless of the cell type4. Unmethylated CpGs usually group together in clusters of regions known as CpG islands5, which cover about 2% of the entire genome. Around 40% of mammalian genes and 70% of human genes have CpG islands enriched in their promoter regions6–8. CpG methylation in gene promoters is generally associated with repression of transcription, and hence silencing of gene expression5. When occurring at the promoters of tumor suppressor gene, DNA methylation could repress the tumour suppressors, leading to oncogenesis3. In contrast, high levels of methylation have been observed in the gene body of highly expressed genes9, which implies positive correlation between gene body methylation and gene expression.\n\nAmong numerous existing technologies, the most widely used method to investigate DNA methylation is bisulfite sequencing (BS-seq), which produces data on the single-nucleotide scale10. Unmethylated cytosines (C) are converted to Uracils (U) by sodium bisulfite and then deaminated to thymines (T) during PCR amplification. Methylated Cs, on the other hand, remain intact after bisulfite treatment. The BS-seq technique can be used to measure genome-wide single-cytosine methylation levels by sequencing the entire genome. This strategy produces whole genome bisulfite sequencing (WGBS) data. 
However, the WGBS approach can be cost-prohibitive for species with large genomes, such as human. In addition, the fact that CpG islands reside in only 2% of the entire genome makes the WGBS approach inefficient when comparing a large number of samples.\n\nTo improve the efficiency and bring down the scale and cost of WGBS, enrichment strategies have been developed and combined with BS-seq to target a specific fraction of the genome. A common targeted approach is reduced representation bisulfite sequencing (RRBS), which targets CpG-rich regions11. Under the RRBS strategy, small fragments that make up only 1% of the genome are generated using MspI digestion, which means fewer reads are required to obtain accurate sequencing. The RRBS approach can capture approximately 70% of gene promoters and 85% of CpG islands, while requiring only small quantities of input sample12. In general, RRBS has great advantages in cost and efficiency when dealing with large-scale data, whereas WGBS is more suitable for studies where all CpG islands or promoters across the entire genome are of interest.\n\nThe first step of analyzing BS-seq data is to align short reads to the genome. The numbers of C-to-T conversions are then counted for all the mapped reads. A number of software tools have been developed for the purposes of read mapping and methylation calling of BS-seq data. Popular ones include Bismark13, MethylCoder14, BRAT15, BS-Seeker16 and BSMAP17. Most of these software tools rely on existing short read aligners, such as Bowtie18.\n\nTypical downstream DNA methylation studies often involve finding differentially methylated regions (DMRs) between different experimental conditions. A number of statistical methods and software packages have been developed for detecting DMRs from BS-seq data. methylKit19 and RnBeads20 implement Fisher's exact test, which is a popular choice for two-group comparisons with no replicates. 
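The two-group, no-replicate case handled by Fisher's exact test can be sketched without any methylation-specific machinery. The counts below are hypothetical, and the helper is written from scratch in plain Python as an illustration (it is not the methylKit or RnBeads code, and the article's own R chunks are not reproduced here):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Enumerates all tables with the observed margins and sums the
    hypergeometric probabilities no larger than that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def prob(x):
        # P(x methylated reads in condition A | fixed margins)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Small slack guards against float round-off when a table's
    # probability equals the observed one exactly.
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical CpG site: condition A has 3 methylated / 7 unmethylated
# reads, condition B has 8 methylated / 2 unmethylated reads.
p_value = fisher_exact_2x2(3, 7, 8, 2)
```

With replicates or more complex designs this per-locus 2x2 test no longer applies, which is what motivates the regression approaches discussed next.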
In the case of complex experimental designs, regression methods are widely used to model methylation levels or read counts. RnBeads offers a linear regression approach based on the moderated t-test and empirical Bayes method implemented in limma21. BSmooth22 is another analysis pipeline that uses linear regression and empirical Bayes together with a local likelihood smoother. methylKit also has an option to apply logistic regression with overdispersion correction19. Some other methods have been developed based on the beta-binomial distribution to achieve better variance modelling. For example, DSS fits a Bayesian hierarchical beta-binomial model to BS-seq data and uses Wald tests to detect DMRs23. Other software packages using the beta-binomial model include BiSeq24, MOABS25 and RADMeth26.\n\nIn this workflow, we demonstrate an edgeR approach to differential methylation analysis. edgeR is one of the most popular Bioconductor packages for assessing differential expression in RNA-seq data27. It is based on the negative binomial (NB) distribution and models the variation between biological replicates through the NB dispersion parameter. Unlike other approaches to methylation sequencing data, the analysis explained in this workflow keeps the counts for methylated and unmethylated reads as separate observations. edgeR linear models are used to fit the total read count (methylated plus unmethylated) at each genomic locus, in such a way that the proportion of methylated reads at each locus is modelled indirectly as an over-dispersed binomial distribution. This approach has a number of advantages. First, it allows the differential methylation analysis to be undertaken using existing edgeR pipelines developed originally for RNA-seq differential expression analyses. The edgeR generalized linear model (GLM) framework offers great flexibility for analysing complex experimental designs while still accounting for the biological variability. 
Second, keeping methylated and unmethylated read counts as separate data observations allows the inherent variability of the data to be modeled more directly and perhaps more realistically. Differential methylation is assessed by likelihood ratio tests, so we do not need to assume that the log-fold-changes or other coefficient estimators are normally distributed.\n\nThis article presents an analysis of an RRBS data set generated by the authors containing replicated RRBS profiles of basal and luminal cell populations from the mouse mammary epithelium. As with other articles in the Bioconductor Gateway series, our aim is to provide an example analysis with complete start-to-finish code. As with other Bioconductor workflow articles, we illustrate one analysis strategy in detail rather than comparing different pipelines. The analysis approach illustrated in this article can be applied to any BS-seq data that includes some replication, but is especially appropriate for RRBS data with small numbers of biological replicates. The results shown in this article were generated using Bioconductor Release 3.6.\n\n\nThe NB linear modeling approach to BS-seq data\n\nTo introduce the edgeR linear modeling approach to BS-seq data, consider a genomic locus that has mA methylated and uA unmethylated reads in condition A, and mB methylated and uB unmethylated reads in condition B. Our approach is to model all four counts as NB distributed with the same dispersion but different means. Suppose the data is as given in Table 1. If this were a complete dataset, then it could be analyzed in edgeR as follows.\n\n\n\nIn this analysis, the first two coefficients are used to model the total number of reads (methylated or unmethylated) for samples 1 and 2, respectively. Coefficient 3 (A_MvsU) estimates the log ratio of methylated to unmethylated reads for sample 1, a quantity that can also be viewed as the logit proportion of methylated reads in sample 1. 
Coefficient 4 (BvsA_MvsU) estimates the difference in logit proportions of methylated reads between conditions B and A. The difference in logits is estimated here as 8.99 on the log2 scale. The P-value for differential methylation (B vs A) is P = 5.27 × 10⁻⁶.\n\nThe dispersion parameter controls the degree of biological variability28. If we had set dispersion=0 in the above code, then the above analysis would be exactly equivalent to a logistic binomial regression, with the methylated counts as responses and the total counts as sizes, and with a likelihood ratio test for a difference in proportions between conditions A and B. Positive values for the dispersion produce over-dispersion relative to the binomial distribution. We have set the dispersion here equal to the value that is estimated below for the mammary epithelial data.\n\nIn the above code, the two library sizes for each sample should be equal. Beyond that, the library size values are arbitrary, and any settings would have led to the same P-value.\n\nIt is interesting to compare this approach with beta-binomial modeling. It is well known that if m and u are independent Poisson random variables with means µm and µu, then the conditional distribution of m given m + u is binomial with success probability p = µm /(µm + µu). If the Poisson means µm and µu themselves follow gamma distributions, then the marginal distributions of m and u are NB instead of Poisson. If the two NB distributions have different dispersions, and have expected values in inverse proportion to the dispersions, then the conditional distribution of m given m + u follows a beta-binomial distribution. The approach taken in this article is closely related to the beta-binomial approach but makes different and seemingly more natural assumptions about the NB distributions. We instead assume the two NB distributions to have the same dispersion but different means. 
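The quantities in the worked example above can be spelled out directly. The counts below are hypothetical (Table 1 itself is not reproduced here), and the sketch is plain Python rather than the edgeR code used in the article: the base-2 logit of the methylation proportion per condition, its between-condition difference, and the NB variance function whose dispersion term produces the over-dispersion relative to the binomial case:

```python
from math import log2

def logit2(m, u):
    """Base-2 logit of the methylation proportion, i.e. log2(m / u)."""
    return log2(m / u)

def nb_variance(mu, phi):
    """Negative binomial variance mu + phi * mu^2.

    phi = 0 gives the Poisson case, under which the conditional model for
    the methylated counts is an ordinary (non-over-dispersed) binomial."""
    return mu + phi * mu ** 2

# Hypothetical counts for one locus:
mA, uA = 5, 80   # condition A: mostly unmethylated
mB, uB = 90, 3   # condition B: mostly methylated
diff_logits = logit2(mB, uB) - logit2(mA, uA)   # difference on the log2 scale
```

In edgeR the logits and their difference are not computed by hand like this; they appear as fitted GLM coefficients, and the dispersion phi is estimated from the replicates.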
The NB linear modeling approach allows the means and dispersions of the two NB distributions to be estimated separately, in concordance with the data instead of being artificially linked.\n\n\nDescription of the biological experiment\n\nThe epithelium of the mammary gland exists in a highly dynamic state, undergoing dramatic morphogenetic changes during puberty, pregnancy, lactation, and regression29. Characterization of the lineage hierarchy of cells in the mammary epithelium is an important step toward understanding which cells are predisposed to oncogenesis. In this study, we profiled the methylation status of the two major functionally distinct epithelial compartments: basal and luminal cells. The basal cells were further divided into those showing high or low expression of the surface marker Itga5 as part of our investigation of heterogeneity within the basal compartment. We carried out global RRBS DNA methylation assays on two biological replicates of each of the three cell populations to determine whether the epigenetic machinery played a potential role in (i) differentiation of luminal cells from basal and (ii) any compartmentalization of the basal cells associated with Itga5.\n\nInguinal mammary glands (minus lymph node) were harvested from FVB/N mice. All animal experiments were conducted using mice bred at and maintained in our animal facility, according to the Walter and Eliza Hall Institute of Medical Research Animal Ethics Committee guidelines. Epithelial cells were suspended and fluorescence-activated cell sorting (FACS) was used to isolate basal and luminal cell populations30. Genomic DNA (gDNA) was extracted from freshly sorted cells using the Qiagen DNeasy kit. Around 25ng gDNA input was subjected to DNA methylation analysis by BS-seq using the Ovation RRBS Methyl-seq kit from NuGEN. 
The process includes MspI digestion of gDNA, sequencing adapter ligation, end repair, bisulfite conversion, and PCR amplification to produce the final sequencing library. The Qiagen EpiTect Bisulfite kit was used for bisulfite-mediated conversion of unmethylated cytosines.\n\nThere are three groups of samples: the luminal population, the Itga5- basal population and the Itga5+ basal population. Two biological replicates were collected for each group. This experimental design is summarized in the table below.\n\n\n\nThe experiment has a simple one-way layout with three groups. A single grouping factor is made as follows:\n\n\n\nThe sequencing was carried out on the Illumina NextSeq 500 platform. About 30 million 75bp paired-end reads were generated for each sample.\n\n\nDifferential methylation analysis at CpG loci\n\nThe first step of the analysis is to map the sequencing reads from the FASTQ files to the mouse genome and then perform methylation calls. Though many options are available, we use Bismark, one of the most popular software tools for aligning bisulfite-treated sequencing reads to a genome of interest and performing methylation calls. It maps sequencing reads using the short read aligner Bowtie 118 or alternatively Bowtie 231.\n\nTo increase alignment rates and reduce false methylation calls, it is recommended to trim poor quality bases at the ends of reads and to remove adapter sequences prior to alignment. This is done using trim_galore (https://www.bioinformatics.babraham.ac.uk/projects/trim_galore/). After that, Bismark v0.13.0 is used to align the reads to the mouse genome mm10. The final methylation calls are made using bismark_methylation_extractor.\n\nThe Bismark outputs include one coverage bed file of the methylation in CpG context for each sample. The coverage outputs from Bismark are available at http://bioinf.wehi.edu.au/edgeR/F1000Research2017/. 
Readers wishing to reproduce the analysis presented in this article can download the zipped coverage bed files produced by Bismark from the above link.\n\nBed files can be read into R using read.delim, as for txt files. Each of the bed files has the following format:\n\n\n\nThe columns in the bed file represent: V1: chromosome number; V2: start position of the CpG site; V3: end position of the CpG site; V4: methylation proportion; V5: number of methylated Cs; V6: number of unmethylated Cs.\n\nSince the start and end positions in the coverage outputs are identical for each CpG site, only one of them is needed for marking the location of each site. We also ignore the methylation proportion as it can be directly calculated from the numbers of methylated and unmethylated Cs. The data can then be read into a list in R:\n\n\n\nThe data object is a list containing six data frames, each of which represents one sample. The first and second columns of each data frame are the chromosome numbers and positions of all the CpG loci observed in that sample. The last two columns contain the numbers of methylated and unmethylated Cs detected at those loci. Since the number of reported CpG loci varies across different samples, care is required to combine the information from all the samples. We first obtain all unique CpG loci observed in at least one of the six samples. This is done by combining the chromosome number and position of each CpG site. Then we extract read counts of methylated and unmethylated Cs at these locations across all the samples and combine them into a count matrix.\n\n\n\nThe counts object is a matrix of integer counts with 12 columns, two for each sample. The odd-numbered columns contain the numbers of methylated Cs, whereas the even-numbered columns contain the numbers of unmethylated Cs. The genomic positions are used as the row names of the count matrix.\n\n\n\nWe then proceed to the edgeR analysis of the methylation data. 
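The merge just described can be sketched with plain Python structures in place of R data frames; the sample names, chromosomes, positions and counts below are hypothetical, and only the two-Me/Un-columns-per-sample layout follows the article:

```python
# Each sample is a list of Bismark-style coverage rows:
# (chromosome, position, methylated count, unmethylated count).
samples = {
    "P6_1": [("chr1", 3020689, 10, 2), ("chr1", 3020716, 7, 5)],
    "P6_2": [("chr1", 3020689, 12, 1), ("chr2", 3060820, 4, 9)],
}

# All unique CpG loci observed in at least one sample, keyed by
# chromosome and position combined, as in the workflow.
loci = sorted({f"{chrom}-{pos}"
               for rows in samples.values()
               for (chrom, pos, m, u) in rows})

# One methylated (Me) and one unmethylated (Un) column per sample;
# loci not reported in a sample get zero counts.
counts = {}
for name, rows in samples.items():
    by_locus = {f"{c}-{p}": (m, u) for (c, p, m, u) in rows}
    counts[name + "-Me"] = [by_locus.get(k, (0, 0))[0] for k in loci]
    counts[name + "-Un"] = [by_locus.get(k, (0, 0))[1] for k in loci]
```

The real workflow does this with R vectorized operations on six samples, producing the 12-column integer matrix described above.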
The edgeR package stores data in a simple list-based data object called a DGEList. We first create a DGEList object using the count matrix generated before. The information on the CpG sites is converted into a data frame and stored in the genes component of the DGEList object.\n\n\n\nWe first sum up the read counts of both methylated and unmethylated Cs at each CpG site within each sample.\n\n\n\nCpG loci that have very low counts across all the samples should be removed prior to downstream analysis as they provide little information for assessing methylation levels. As a rule of thumb, we require a CpG site to have a total count (both methylated and unmethylated) of at least 10 across all the samples before it is considered in the study.\n\n\n\nThe DGEList object is subsetted to retain only the non-filtered loci:\n\n\n\nThe option keep.lib.sizes=FALSE causes the library sizes to be recomputed after the filtering. This is generally recommended, although the effect on the downstream analysis is usually small.\n\nA key difference between BS-seq and other sequencing data is that the pair of libraries holding the methylated and unmethylated reads for a particular sample are treated as a unit. To ensure that the methylated and unmethylated reads for the same sample are treated on the same scale, we need to set the library sizes to be equal for each pair of libraries. We set the library sizes for each sample to be the average of the total read counts for the methylated and unmethylated libraries:\n\n\n\nOther normalization methods developed for RNA-seq data, such as TMM32, are not required for BS-seq data.\n\nIn DNA methylation studies, methylation levels are of most interest. For Illumina methylation assays, two common measurements of methylation levels are β-values and M-values, which are defined as β = M /(M + U) and M-value = log2(M /U), where M and U denote the methylated and unmethylated intensities33. Here we adopt the same idea and extend the two measurements to BS-seq data. 
That is, denote the methylated and unmethylated Cs by M and U respectively, and define the β-values and M-values in the same way as above.\n\nIn practice, for a particular CpG site in one sample, the M-value can be computed by subtracting the log2 count-per-million (CPM) of the unmethylated Cs from that of the methylated Cs. This is equivalent to the calculation of the defined M-values, as the library sizes are set to be the same for each pair of methylated and unmethylated columns and they cancel each other out in the subtraction. A prior count of 2 is added to the calculation of log2-CPM to avoid undefined values and to reduce the variability of M-values for CpG sites with low counts. The calculation of β-values is straightforward, though a small offset may also be added to the calculation.\n\n\n\nThe outputs Beta and M are numeric matrices with six columns, each of which contains the β-values or M-values calculated at each CpG site in one sample. Then we can generate multi-dimensional scaling (MDS) plots to explore the overall differences between the methylation levels of the different samples. Here we decorate the MDS plots to indicate the cell groups:\n\n\n\nFigure 1 shows the resulting plots. In these plots, the distance between each pair of samples represents the average log-fold change between the samples for the top most differentially methylated CpG loci between that pair of samples. (We call this average the leading log-fold change.) The two replicate samples from the luminal population (P6) are seen to be well separated from the four basal samples (populations P7 and P8).\n\nMethylation levels are measured in β-values (left) and M-values (right). Samples are separated by cell population in the first dimension in both MDS plots.\n\nOne aim of this study is to identify differentially methylated regions (DMRs) between different groups. 
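The β-value and M-value arithmetic above can be checked numerically with a simplified plain-Python sketch (hypothetical counts; not the edgeR cpm code): because the methylated and unmethylated columns of one sample are given the same library size, the CPM scaling cancels in the subtraction and the M-value reduces to a log2 ratio of prior-augmented counts.

```python
from math import log2

def beta_value(m, u):
    """Methylation proportion: beta = M / (M + U)."""
    return m / (m + u)

def m_value(m, u, prior=2.0):
    """M-value with a prior count added to the methylated and unmethylated
    counts. With equal library sizes for the pair of columns, the
    log2-CPM subtraction reduces to this log2 ratio."""
    return log2((m + prior) / (u + prior))

# Hypothetical CpG site with 18 methylated and 2 unmethylated reads:
b = beta_value(18, 2)   # 0.9
m = m_value(18, 2)      # log2(20 / 4) = log2(5)
```

Without the prior count, a site with zero unmethylated reads would give an undefined (infinite) M-value, which is exactly the failure mode the offset avoids.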
In edgeR, this can be done by fitting linear models under a specified design matrix and testing for corresponding coefficients or contrasts. Here, a design matrix is constructed as follows:\n\n\n\nThe first six columns represent the sample effects, accounting for the fact that each pair of columns of the count matrix comes from one of the six samples. The 7th column \"Me\" represents the methylation level (in M-value) in the P6 group. The 8th column \"Me2\" represents the difference in methylation level between the P7 and P6 groups. Finally, the last column \"Me3\" represents the difference in methylation level between the P8 and P6 groups.\n\nWith the design matrix specified, we can now proceed to the standard edgeR pipeline and analyze the data in the same way as for RNA-seq data. As with RNA-seq data, variability between biological replicates has also been observed in bisulfite sequencing data. This variability can be captured by the NB dispersion parameter under the generalized linear model (GLM) framework in edgeR.\n\nThe mean-dispersion relationship of BS-seq data has been studied in the past and no apparent mean-dispersion trend was observed23. This is also verified through our own experience. Therefore, we do not consider a mean-dependent dispersion trend as we normally would for RNA-seq data. A common dispersion estimate for all the loci, as well as an empirical Bayes moderated dispersion for each individual locus, can be obtained from the estimateDisp function in edgeR:\n\n\n\nThis returns a DGEList object with additional components (common.dispersion and tagwise.dispersion) added to hold the estimated dispersions. Here the estimation of trended dispersion has been turned off by setting trend=\"none\". For this data, the estimated prior degrees of freedom (df) are infinite for all the loci, which implies all the CpG-wise dispersions are exactly the same as the common dispersion. 
A BCV plot is often useful to visualize the dispersion estimates, but it is not informative in this case.\n\nWe first fit NB GLMs for all the CpG loci using the glmFit function in edgeR.\n\n\n\nThen we can proceed to testing for differentially methylated CpG sites between different populations. One of the most interesting comparisons is between the basal (P7 and P8) and luminal (P6) groups. The contrast corresponding to any specified comparison can be constructed conveniently using the makeContrasts function:\n\n\n\nThe actual testing is performed using likelihood ratio tests (LRT) in edgeR.\n\n\n\nThe top set of most differentially methylated (DM) CpG sites can be viewed with topTags:\n\n\n\nHere positive log-fold changes represent CpG sites that have a higher methylation level in the basal population compared to the luminal population. The Benjamini-Hochberg multiple testing correction is applied to control the false discovery rate (FDR).\n\nThe total number of DM CpG sites identified at an FDR of 5% can be shown with decideTestsDGE. There are in fact more than 50,000 differentially methylated CpGs in this comparison:\n\n\n\nThe differential methylation results can be visualized with an MD plot (see Figure 2). The logFC of the methylation level for each CpG site is plotted against the average abundance in log2-CPM, with significantly up- and down-methylated CpGs highlighted in red and blue, respectively.\n\n\n\n\nDifferential methylation in gene promoters\n\nThe majority of CpGs are methylated in mammals. On the other hand, unmethylated CpGs tend to group into clusters of CpG islands, which are often enriched in gene promoters. CpG methylation in promoter regions is often associated with silencing of transcription and gene expression5. 
Therefore it is of great biological interest to examine the methylation levels within gene promoter regions.\n\nFor simplicity, we define the promoter of a gene as the region from 2kb upstream to 1kb downstream of the transcription start site of that gene. The genomic locations and associated annotations of the promoters can be obtained using the TxDb.Mmusculus.UCSC.mm10.knownGene package.\n\n\n\nHere, pr is a GRanges class object that contains the genomic ranges of the promoters of all the known mouse genes in the annotation package.\n\nWe create another GRanges class object sites, which contains the genomic locations of all the observed CpG sites.\n\n\n\nThen we find the overlaps between the gene promoter regions and all the CpG sites in the data using findOverlaps.\n\n\n\nThe queryHits component of olap marks the indices of the promoter regions as in pr, whereas the subjectHits component contains the indices of the CpG sites as in sites that overlap with the corresponding promoter regions.\n\nThe numbers of methylated and unmethylated CpGs overlapping with gene promoters are summed up for each promoter.\n\n\n\nThe integer matrix counts2 contains the total numbers of methylated and unmethylated CpGs observed within the promoter of each gene. As before, counts2 has 12 columns, two for each sample. The odd-numbered columns contain the numbers of methylated Cs, whereas the even-numbered columns contain the numbers of unmethylated Cs. The only difference is that each row of counts2 now represents a gene promoter instead of an individual CpG site.\n\nThe gene symbol information can be added to the annotation using the org.Mm.eg.db package. A DGEList object is created for the downstream edgeR analysis.\n\n\n\nWe sum up the read counts of both methylated and unmethylated Cs at each CpG site within each sample.\n\n\n\nFiltering is performed in the same way as before. 
Since each row represents a 3,000-bp-wide promoter region that contains multiple CpG sites, we would expect less filtering than before.\n\n\n\n\n\nAs before, we do not perform normalization but set the library sizes for each sample to be the average of the total read counts for the methylated and unmethylated libraries.\n\n\n\nAs before, we measure the methylation levels of gene promoter regions using both β-values and M-values. A prior count of 2 is added to the calculation of log2-CPM to avoid undefined values and to reduce the variability of M-values for gene promoters with low counts. Then MDS plots are produced to examine the overall differences between the methylation levels of the different samples.\n\n\n\nThe resulting Figure 3 shows that the two replicate samples from the luminal population (P6) are well separated from the four replicate samples from the basal population (P7 and P8).\n\nMethylation levels are measured in β-values (left) and M-values (right). Samples are separated by cell population in the first dimension in both MDS plots.\n\nWe estimate the NB dispersions using the estimateDisp function in edgeR. For the same reason as before, we do not consider a mean-dependent dispersion trend as we normally would for RNA-seq data.\n\n\n\nThe dispersion estimates can be visualized with a BCV plot (see Figure 4), which shows the square-root estimates of the common and tagwise NB dispersions.\n\nWe first fit NB GLMs for all the gene promoters using glmFit.\n\n\n\nThen we can proceed to testing for differential methylation in gene promoter regions between different populations. Suppose the comparison of interest is the same as before. The same contrast can be used for the testing.\n\n\n\nThe top set of most differentially methylated gene promoters can be viewed with topTags:\n\n\n\nHere positive log-fold changes represent gene promoters that have a higher methylation level in the basal population compared to the luminal population. 
The Benjamini-Hochberg multiple testing correction is applied to control the false discovery rate (FDR).\n\nThe total number of DM gene promoters identified at an FDR of 5% can be shown with decideTestsDGE. There are in fact about 1,200 differentially methylated gene promoters in this comparison:\n\n\n\nThe differential methylation results can be visualized with an MD plot (see Figure 5), in which significantly up- and down-methylated gene promoters are highlighted in red and blue, respectively.\n\n\nCorrelate with RNA-seq profiles\n\nTo show that DNA methylation (particularly in the promoter regions) represses gene expression, we relate the differential methylation results to the gene expression profiles of the RNA-seq data. The RNA-seq data used here are from a study of the epithelial cell lineage in the mouse mammary gland34, in which the expression profiles of basal stem-cell enriched cells and committed luminal cells in the mammary glands of virgin, pregnant and lactating mice were examined. The complete differential expression analysis of the data is described in Chen et al.35.\n\nThe RNA-seq data are stored in the format of a DGEList object y_rna and saved in an RData file rna.RData. The object y_rna contains the count matrix, sample information, gene annotation, design matrix and dispersion estimates of the RNA-seq data. The gene filtering, normalization and dispersion estimation were performed in the same way as described in Chen et al.35. The DE analysis between basal and luminal in the virgin mice was performed using glmTreat with a fold-change threshold of 3. The results are saved in the spreadsheet BvsL-fc3.csv. Both rna.RData and BvsL-fc3.csv are available for download at http://bioinf.wehi.edu.au/edgeR/F1000Research2017/.\n\nWe load the RData file and read in the DE results from the spreadsheet.\n\n\n\nWe select the genes whose promoters are significantly DM (FDR < 0.05) and examine their expression levels in the RNA-seq data. 
A data frame object lfc is created to store the gene information, the log-fold change of methylation level and the log-fold change of gene expression for the selected genes.\n\n\n\nThe Pearson correlation coefficient between the two log-fold changes of the selected genes is estimated. The result shows a strong negative correlation between gene expression and methylation in gene promoters.\n\n\n\nThe log-fold changes of the selected genes from the two datasets are plotted against each other for visualization (see Figure 6):\n\n\n\nThe plot shows results for the genes whose promoters are significantly differentially methylated between basal and luminal. The red line shows the least squares line with zero intercept. A strong negative correlation is observed.\n\nThe horizontal axis of the scatterplot shows the log-fold change in methylation level for each gene while the vertical axis shows the log-fold change in expression. To assess the correlation, we fit a least squares regression line through the origin and compute the p-value:\n\n\n\nThe negative association is highly significant (P ≈ 10⁻⁴⁷). The last line of code adds the regression line to the plot (Figure 6).\n\nA rotation gene set test can be performed to further examine the relationship between gene expression and methylation in gene promoters. This tests whether the set of genes (i.e., genes whose promoters are differentially methylated) is differentially expressed (DE) and in which direction.\n\nThe indices are made by matching the Entrez Gene IDs between the two datasets. The log-fold changes of methylation level in gene promoters are used as weights for those genes. The test is conducted using the fry function in edgeR. The contrast is set to compare basal with luminal in virgin mice.\n\n\n\nThe small P-value indicates a significant test result. 
The result Down in the Direction column indicates a negative correlation between methylation and gene expression.\n\nWe can visualize the gene set results with a barcode plot (see Figure 7):\n\n\n\nIn the barcode plot, genes are sorted left to right according to expression changes. Genes up-regulated in luminal are on the left and genes up-regulated in basal are on the right. The x-axis shows the expression log2-fold change between basal and luminal. The vertical red bars indicate genes up-methylated in basal and vertical blue bars indicate genes down-methylated in basal. The variable-height vertical bars represent the methylation log-fold changes. The red and blue worms measure relative enrichment, showing that up-methylation is associated with down-regulation and down-methylation is associated with up-regulation. In other words, there is a negative association between methylation of promoter regions and expression of the corresponding gene.\n\n\nPackages used\n\nThis workflow depends on various packages from version 3.6 of the Bioconductor project, running on R version 3.4.0 or higher. Most of the workflow also works with Bioconductor 3.5, but the code in the last section (Correlate with RNA-seq profiles) requires some minor changes for use with Bioconductor 3.5 because the earlier version of topTags did not preserve row names in the output table. A complete list of the packages used for this workflow is shown below:\n\n\n\n\nData and software availability\n\nAll data and supporting files used in this workflow are available from: http://bioinf.wehi.edu.au/edgeR/F1000Research2017\n\nArchived code/data as at time of publication: http://doi.org/10.5281/zenodo.1052871 (reference 36)\n\nAll software used is publicly available as part of Bioconductor 3.6.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Health and Medical Research Council (Fellowship 1058892 and Program 1054618 to G.K.S., Independent Research Institutes Infrastructure Support to the Walter and Eliza Hall Institute) and by a Victorian State Government Operational Infrastructure Support Grant.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors thank Andrew Keniry for help on Bismark.\n\n\nReferences\n\nBird A: Perceptions of epigenetics. Nature. 2007; 447(7143): 396–8.\n\nJones PA, Laird PW: Cancer epigenetics comes of age. Nat Genet. 1999; 21(2): 163–7.\n\nJones PA, Baylin SB: The fundamental role of epigenetic events in cancer. Nat Rev Genet. 2002; 3(6): 415–28.\n\nJabbari K, Bernardi G: Cytosine methylation and CpG, TpG (CpA) and TpA frequencies. Gene. 2004; 333: 143–149.\n\nBird AP: CpG-rich islands and the function of DNA methylation. Nature. 1986; 321(6067): 209–213.\n\nFatemi M, Pao MM, Jeong S, et al.: Footprinting of mammalian promoters: use of a CpG DNA methyltransferase revealing nucleosome positions at a single molecule level. Nucleic Acids Res. 2005; 33(20): e176.\n\nDeaton AM, Bird A: CpG islands and the regulation of transcription. Genes Dev. 2011; 25(10): 1010–1022.\n\nSaxonov S, Berg P, Brutlag DL: A genome-wide analysis of CpG dinucleotides in the human genome distinguishes two distinct classes of promoters. Proc Natl Acad Sci U S A. 2006; 103(5): 1412–1417.\n\nLister R, Pelizzola M, Dowen RH, et al.: Human DNA methylomes at base resolution show widespread epigenomic differences. Nature. 2009; 462(7271): 315–22.\n\nFrommer M, McDonald LE, Millar DS, et al.: A genomic sequencing protocol that yields a positive display of 5-methylcytosine residues in individual DNA strands. Proc Natl Acad Sci U S A. 1992; 89(5): 1827–1831.\n\nMeissner A, Gnirke A, Bell GW, et al.: Reduced representation bisulfite sequencing for comparative high-resolution DNA methylation analysis. Nucleic Acids Res. 2005; 33(18): 5868–5877.\n\nGu H, Smith ZD, Bock C, et al.: Preparation of reduced representation bisulfite sequencing libraries for genome-scale DNA methylation profiling. Nat Protoc. 2011; 6(4): 468–81.\n\nKrueger F, Andrews SR: Bismark: a flexible aligner and methylation caller for Bisulfite-Seq applications. Bioinformatics. 2011; 27(11): 1571–1572.\n\nPedersen B, Hsieh TF, Ibarra C, et al.: MethylCoder: software pipeline for bisulfite-treated sequences. Bioinformatics. 2011; 27(17): 2435–2436.\n\nHarris EY, Ponts N, Levchuk A, et al.: BRAT: bisulfite-treated reads analysis tool. Bioinformatics. 2010; 26(4): 572–573.\n\nChen PY, Cokus SJ, Pellegrini M: BS Seeker: precise mapping for bisulfite sequencing. BMC Bioinformatics. 2010; 11(1): 203.\n\nXi Y, Li W: BSMAP: whole genome bisulfite sequence MAPping program. BMC Bioinformatics. 2009; 10(1): 232.\n\nLangmead B, Trapnell C, Pop M, et al.: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3): R25.\n\nAkalin A, Kormaksson M, Li S, et al.: methylKit: a comprehensive R package for the analysis of genome-wide DNA methylation profiles. Genome Biol. 2012; 13(10): R87.\n\nAssenov Y, Müller F, Lutsik P, et al.: Comprehensive analysis of DNA methylation data with RnBeads. Nat Methods. 2014; 11(11): 1138–1140.\n\nRitchie ME, Phipson B, Wu D, et al.: limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7): e47.\n\nHansen KD, Langmead B, Irizarry RA: BSmooth: from whole genome bisulfite sequencing reads to differentially methylated regions. Genome Biol. 2012; 13(10): R83.\n\nFeng H, Conneely KN, Wu H: A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data. Nucleic Acids Res. 2014; 42(8): e69.\n\nHebestreit K, Dugas M, Klein HU: Detection of significantly differentially methylated regions in targeted bisulfite sequencing data. Bioinformatics. 2013; 29(13): 1647–1653.\n\nSun D, Xi Y, Rodriguez B, et al.: MOABS: model based analysis of bisulfite sequencing data. Genome Biol. 2014; 15(2): R38.\n\nDolzhenko E, Smith AD: Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments. BMC Bioinformatics. 2014; 15(1): 215.\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140.\n\nMcCarthy DJ, Chen Y, Smyth GK: Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012; 40(10): 4288–4297.\n\nVisvader JE: Keeping abreast of the mammary epithelial hierarchy and breast tumorigenesis. Genes Dev. 2009; 23(22): 2563–2577.\n\nShackleton M, Vaillant F, Simpson KJ, et al.: Generation of a functional mammary gland from a single stem cell. Nature. 2006; 439(7072): 84–8.\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–359.\n\nRobinson MD, Oshlack A: A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 2010; 11(3): R25.\n\nDu P, Zhang X, Huang CC, et al.: Comparison of Beta-value and M-value methods for quantifying methylation levels by microarray analysis. BMC Bioinformatics. 2010; 11(1): 587.\n\nFu NY, Rios AC, Pal B, et al.: EGF-mediated induction of Mcl-1 at the switch to lactation is essential for alveolar cell survival. Nat Cell Biol. 2015; 17(4): 365–75.\n\nChen Y, Lun AT, Smyth GK: From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline [version 2; referees: 5 approved]. F1000Res. 2016; 5: 1438.\n\nChen Y, Pal B, Visvader JE, et al.: Data and code for “Differential methylation analysis of reduced representation bisulfite sequencing experiments using edgeR” [Dataset]. Zenodo. 2017."
}
|
[
{
"id": "28483",
"date": "07 Dec 2017",
"name": "Simon Andrews",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nChen et al present an interesting re-application of the EdgeR analysis package to the analysis of bisulphite sequencing data. The method they propose would utilise the existing negative binomial models within EdgeR and would potentially provide the power which comes with the linear model framework to bisulphite data. The method described requires no changes to EdgeR itself, and merely describes a suitable formulation of the design matrix to allow this to be applied to bisulphite data.\nThe article is generally well written and the authors go to great lengths to break down and describe the method. They also provide a page from which all of the underlying data and code can be obtained and I was able to reproduce the results, and independently verify them in a parallel analysis.\nThe main thing which I struggled with was some of the detail in the description of the method itself. There were some parts which I wasn't clear on, and some nomenclature which didn't help in understanding the explanation. I'll try to lay out my concerns below:\n1) In the small example I completely understand that the authors wanted to keep this as simple as possible, but it might have helped to have had 2 samples per condition so that the full complexity of the method is visible.\n\n2) There is a typo in the code for the small example so it doesn't run as is. The list function on line 2 has an extra bracket at the end.\n3) The nomenclature in the small example is inconsistent. 
You have samples 1 and 2, but (in the table) also conditions 1 and 2, but in the code the conditions are A and B. If you had Samples 1,2,3,4 in conditions A and B this might help to alleviate some of the confusion.\n4) In the small example description you say that A_MvsU estimates the log ratio for Sample1, but it wasn't clear to me why this would apply to only Sample 1 since the factor has a 1 against the meth count for both samples 1 and 2.\nIn the expanded examples there were also some points on which I wasn't clear.\n5) You calculate a single dispersion parameter for all data points and say that in contrast to RNA-Seq there is no global trend to follow. It wasn't clear to me exactly why this is, since read count and methylation level would both affect the dispersion - is it simply because these factors are explicitly accounted for in the linear model?\n6) In the design matrix for the RRBS it wasn't clear why the first column was all 1s, whereas the rest obviously matched the condition from which they came. This also contrasted with the simple example where the structure wasn't like this. Is this because you were comparing both P7 and P8 to P6?\n7) I think this is possibly the same thing as point 4, but you say that the Me column represents the methylation level in P6, but again this highlights the methylated values in all samples, so why only P6?\nFor the final results obtained it would have been nice to show the general level of concordance with running the same analysis through one of the beta-distribution models to either show general agreement, or to generally explain any major differences.\n\nMinor points:\nIn the introduction you say that \"40% of mammalian genes and 70% of human genes have CpG islands enriched in their promoter regions\". Enriched probably isn't the right word to use (or you need to say that CpGs are enriched rather than islands). 
The difference between 'humans' and 'mammals' is also somewhat contentious - non-human mammals certainly have weaker CpG islands which get missed by CpG island prediction tools, but for example in mouse Illingworth et al showed that if you use CpG binding protein ChIP that you can see about the same number of islands in both species.\nIt's also not really fair to say that CpG methylation in promoters is \"generally\" associated with repression of transcription. There is a categorical expression level shift associated with the presence/absence of CpG islands, but you can make a Dnmt1 knockout which removes pretty much all methylation from the genome and for the vast majority of genes their transcription is completely unaffected.\nP3 \"with large genome\" should be \"with large genomes\"\nP5 \"mythylated\" should be \"methylated\"\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Partly\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4024",
"date": "08 Oct 2018",
"name": "Gordon Smyth",
"role": "Author Response",
"response": "Thank you for your thoughtful comments on our article. We have rewritten the Introduction to take account of your comments and have fixed all typos. We have also expanded the expository section on the NB linear modeling approach to five pages instead of one. The purpose of the first toy example (Table 1) is to introduce the key mathematical idea that underlies our article, which is the use of linear model terms to account for read coverage at each CpG, and to relate this to the more familiar beta-binomial modeling approach. This idea is already manifest in the very small dataset without replicates, making the one-sample example the most fundamental and effective context in which to introduce the idea. We agree however that the condition labels in Table 1 should be A and B. Following your remarks, we have included a second example with 2 samples per condition (Table 2). Keep in mind though that the edgeR approach to methylation is very powerful and flexible, and its full complexity is still far from being visible in a single-locus example like this. To see more of the features in play, one has to work through the full data example. We have however taken the opportunity to preview more ideas in the expanded linear modeling introduction, including dispersion estimation and the use of contrasts. You asked about the A_MvsU coefficient computation. This is a standard linear modeling result in R but one that does confuse a lot of people when they see it for the first time. You will see analogous results explained in the limma and edgeR User’s Guides. Anyway, we have rewritten the first example in a different way so that the coefficients are defined more explicitly and their calculation is explained in full arithmetic detail. The original implicit formulation of the logFC is now presented in the section “Another way to make the design matrix”. 
Consider the last two columns of the design matrix in that section, which are now called “Intercept” and “ConditionB”, and write βA and βBvsA for the corresponding two coefficients. The interpretation of these coefficients becomes clear by multiplying the coefficients by the design matrix. For samples corresponding to condition A, the methylation level is represented by βA alone because the last column of the design matrix is zero (rows 1 and 3). When condition B is applied, the methylation is modelled by βA + βBvsA (rows 5 and 7). So it is apparent that βA must represent the methylation status for condition A while βBvsA must be the difference in methylation between B and A. Users don’t need to follow this mathematics; we just present it here for completeness. You also asked about the first column of the design matrix. It doesn’t matter whether the first column is all 1s or not, as long as the leading columns of the design matrix span the sample effects. We have added a small example on pages 3-4 to demonstrate this. The first few columns of the design matrix model the sample coverages, and these columns are quite independent of the remaining columns which model the treatment conditions. We can compare the treatment conditions in any way we wish regardless of how the design matrix is parametrized. We have added some material to the introduction to try to explain this but, in any case, it isn’t something that a user needs to worry about. In this revision, we have introduced a new function modelMatrixMeth to create the design matrix automatically, so users now only need to focus on the treatment conditions, not on how to adjust for the sample effects. There is no reason why the dispersion should be a function of read count or of methylation level. The purpose of the NB statistical model is to capture the technical mean-variance trends so that the dispersion parameter can be interpreted independently of these things. 
We have added a sub-section on dispersion estimation to page 7 to try to make this clear. Regarding concordance with other methylation analysis software, we do not know of any beta-binomial modeling software that is able to conduct an equivalent analysis to that presented in our article. We have formed a contrast between luminal cells and the two basal populations, and no other software can do that as far as we know. This is an example of the extra flexibility provided by our approach. The DSS Bioconductor package is documented to be able to form general contrasts between treatment conditions, but this is not effective in practice because it does not distinguish hyper from hypo methylation for a contrast. Our experience is that edgeR gives better DM results than beta-binomial software but it would be unfair for us to claim that in our article without undertaking a full comparison study with rigorous benchmarking. Our plan is to publish such a comparison elsewhere."
}
]
},
{
"id": "28484",
"date": "18 Dec 2017",
"name": "James W. MacDonald",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is primarily a software pipeline article, showing how to use the Bioconductor edgeR package to analyze RRBS data, but to a certain extent is also a methods paper, as to my knowledge this is the first proposal for directly analyzing count data rather than converting to either ratios and using a beta-binomial, or to logits and using conventional linear modeling. This is an interesting idea, and should be explored further, but for this manuscript the main goal is to present the software pipeline.\nThe authors progress through each step of the pipeline, clearly describing each step as well as providing code (and links to the underlying data), so readers can easily understand the process and get some hands on experience as well.\nThe code is clearly written, and as straightforward as one could expect for a relatively complex analysis. However, I would prefer to see more consistent integration with other Bioconductor packages. In particular, when reading in the raw data, the authors use a clever trick to account for the fact that not all samples have reads for the same genomic positions. This step could just as easily be accomplished using the Bioconductor GenomicRanges package, which is intended for manipulating genomic data. 
In fact, the authors use GenomicRanges later in the pipeline to subset the methylation data to just gene promoter sites, so it would be more natural to start with a GRanges if you will need one later anyway.\nOtherwise this is a good article that clearly shows how one could use an innovative method to analyze RRBS data using the edgeR package.\nTypos: Under a small example section, (BvsA_MvsU) estimates the difference in logit proportions of mythylated\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4026",
"date": "08 Oct 2018",
"name": "Gordon Smyth",
"role": "Author Response",
"response": "Thank you for your positive report on our article. You’re right, there is a novel methods proposal in our article, which we will seek to justify in more detail in a methodological article elsewhere, but the current article concentrates on the software pipeline. In this revision, we have introduced new functions readBismark2DGE() and nearestTSS() to simplify the workflow. These functions allow us to eliminate from the workflow any code chunks that involve substantial programming, making the workflow considerably easier for users. We felt the original code chunk to find CpGs in promoters in particular was too complex for the purposes of our article and assumed too much user knowledge. I have used GenomicRanges for other projects (https://f1000research.com/articles/4-1080) but it doesn’t help with the current workflow. The file reading code would be similar in length and speed with or without GenomicRanges, so we tend to give preference to R base. In any case, we have eliminated that code entirely, as explained in our discussion with Peter Hickey. The new nearestTSS() function is more informative than the old code, as well as being fast. It has the added advantage of using the Bioconductor organism package, which has more up-to-date TSS annotation than the transcript package."
}
]
},
{
"id": "28485",
"date": "20 Dec 2017",
"name": "Peter F. Hickey",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nChen et al. propose a novel use of negative binomial generalized linear models (GLMs), as implemented in the edgeR software, to test for differential methylation from bisulfite-sequencing data, particularly reduced representation bisulfite-sequencing (RRBS). By leveraging the existing edgeR software, a popular tool for differential gene expression analysis of RNA-seq data, this method is immediately able to handle complex experimental designs and integrates with downstream analysis tools such as gene set tests provided by the limma software. The paper is well-written and I was able to reproduce the authors’ analysis. It will be a useful workflow for people needing to develop an analysis of RRBS data.\nLike Simon Andrews, I initially struggled a little with some of the detail of the method itself. The method’s elegance and power, like all those based on (generalized) linear models, is driven by careful formulation of the design matrix and the choice of contrasts. Necessarily, the design matrix for analysing bisulfite-sequencing data is more complex than that used to analyse RNA-seq data from an identical experimental design. As I know the authors are well aware, getting the design matrix and contrasts correct is 95% of the battle for most people analysing data with edgeR and limma. I will explain my concerns below (many are the same as raised by Simon in his review).\nMain points\np4: The initial example has no replicates. 
Since the method is designed for “any BS-seq data that includes some replication”, should this example include replicates? I appreciate the desire to keep the initial example simple (especially in light of my next comment). p4: I initially found the design matrix confusing. In fact, I had the same reaction/interpretation as Simon Andrews' 4th comment: “In the small example description you say that A_MvsU estimates the log ratio for Sample1, but it wasn’t clear to me why this would apply to only Sample 1 since the factor has a 1 against the meth count for both samples 1 and 2”. I had to manually check a few quantities to convince myself, e.g., up to rounding error, `coef(fit)[, \"A_MvsU\"]` is `logit((2 + prior.count) / (2 + prior.count + 12 + prior.count))`, where `prior.count = 0.125`. Because so much depends on constructing the appropriate design matrix, this description/section may warrant further explanation (e.g., comparing to some manually computed quantities). Like James MacDonald, although the code was clearly written, I was a little surprised that it didn’t use more consistent integration with existing Bioconductor packages and data structures. To add to his example, almost all the work in the section ‘Reading in the data’ can be achieved with `bsseq::read.bismark(fn)`, which will: read in an arbitrary number of Bismark `.cov.gz` files, appropriately combine samples with different sets of CpGs, and return a SummarizedExperiment-derived object (a BSseq instance) which could readily be used to construct the DGEList used in the analysis. In my experience, loading the data and combining different sets of loci is a step fraught with danger of hard-to-track-down errors, so it may be better to advise workflow users to use a fairly well-tested function. Full disclosure: I am the author of `bsseq::read.bismark()`. p14: The aggregation of CpGs to promoters may lead to surprising results. 
An (extreme) example: the first half of a promoter is methylated in one condition and unmethylated in the other, and vice versa for the second half of the promoter. In aggregate over the promoter the proportion of methylated CpGs may be similar in both conditions, yet this promoter is clearly differentially methylated. I think a note encouraging workflow users to think carefully about their hypothesis when doing this form of aggregation is warranted.\n\nMinor points\np1: “The most commonly used technology of studying DNA methylation is bisulfite sequencing (BS-seq)”. The Illumina 27k/450k/EPIC microarrays are the most commonly used ‘genome-wide’ assays for studying DNA methylation. However, (whole genome) BS-seq is arguably the gold standard genome-wide assay.\np3: I think there’s some confusion about CpGs and CpG islands (CGI). Approximately 0.9% of dinucleotides in the human genome (hg19) are CpGs, and approximately 0.7% of the genome is a CGI (using UCSC CGIs, which is not the only definition but perhaps the standard); see code below:\n```R\nlibrary(BSgenome.Hsapiens.UCSC.hg19)\nhg19_size <- sum(as.numeric(seqlengths(BSgenome.Hsapiens.UCSC.hg19)[paste0(\"chr\", c(1:22, \"X\", \"Y\"))]))\n# CpGs on chr1-22,chrX,chrY in hg19\nn_CpGs <- Reduce(sum, bsapply(BSParams = new(\"BSParams\",\n    X = BSgenome.Hsapiens.UCSC.hg19,\n    FUN = countPattern,\n    exclude = c(\"M\", \"_\")),\n  pattern = \"CG\"))\n100 * n_CpGs / hg19_size\n# CGIs in hg19\nlibrary(rtracklayer)\nmy_session <- browserSession(\"UCSC\")\ngenome(my_session) <- \"hg19\"\ncgi <- track(ucscTableQuery(my_session, track = \"cpgIslandExt\"))\nsum(width(cgi)) / hg19_size\n```\np3: Possible typo, “with a large genome”\np3: “WGBS is more suitable for studies where all CpG islands or promoters across the entire genome are of interest.” Might also add ‘distal regulatory elements’ and CG-poor regions (RRBS targets CG-rich regions of the genome). 
p3: BSmooth (implemented in bsseq) doesn’t use Empirical Bayes although it does use limma for linear regression\np4: Missing a `library(edgeR)` in order for the code to work\np4: There’s an extra parenthesis at the end of line 2 when constructing `dimnames(counts)`\np4: The authors note that the method is “especially appropriate for RRBS data”. Is the main challenge for running on WGBS data that of computational resources?\np5: Typo, “mythlyated” should be “methylated”\nTable 1: Condition should be ‘A’ or ‘B’ instead of ‘1’ or ‘2’\np6: Was Bowtie1 or Bowtie2 used as the Bismark backend for the mouse data?\np8: The filtering step removes almost 90% of CpGs. Is this unavoidable, e.g., due to low sequencing coverage of these samples, or might the filtering be relaxed?\nFigure 1: Any thoughts for why the P8_6 sample is rather separated from the other Basal samples along dim2 of the MDS plot?\nFigure 2: What is the meaning of ‘average abundance of each CpG site’? Is ‘abundance’ interpretable as ‘sequencing depth’?\np16: Possible typo, “Suppose the comparison of interest is the same as before”\np22: In the DNA methylation literature, ‘up-methylated’ is typically called ‘hypermethylated’ and ‘down-methylated’ is typically called ‘hypomethylated’.\n15th July 2019: The competing interests section has been updated to reflect Dr Hickey’s previous University of Melbourne affiliation.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4025",
"date": "08 Oct 2018",
"name": "Gordon Smyth",
"role": "Author Response",
"response": "Thank you for your thoughtful and positive comments on our article. We particularly wish to thank you for your comments on the introductory material, which we have rewritten to take account of all your suggestions. We have specified the version of Bowtie, adopted the “hyper” and “hypo” terminology, and made all the other corrections. In this revision, we have added a lot of new material to the expository section, including an example with replicates, while retaining the first example without replicates. The purpose of the first example is to introduce the key mathematical idea that underlies our article, which is the use of linear model terms to account for total read coverage at each CpG, and to relate this to the more familiar beta-binomial modeling approach. This idea is already manifest in the very small dataset without replicates, and so that is the most fundamental context in which to exposit the idea. The need for replicates relates to dispersion estimation, which is a separate issue. For pedagogic purposes it is best to isolate these concepts separately. We now demonstrate in the first example how the logFC is computed in complete arithmetic detail. We have switched the toy examples to use the group-mean parametrization instead of the treatment-parametrization so that the computation of logFC is explicit rather than implicit. See also our reply to Simon Andrews. We have introduced a new function modelMatrixMeth() to automate the construction of the design matrix. From a user point of view, the design matrix is now made exactly as it would be for an RNA-seq analysis, with the sample effects added automatically. 
We have also added a new section “Analogy with paired-samples expression analyses” to show that the entire linear modeling approach is equivalent to an (already well understood) type of RNA-seq experiment. The purpose of our article is to show that methylation data can be analyzed exactly like RNA-seq, and we use the same packages and data structures as our earlier RNA-seq workflow article (Chen et al., 2017). Adding extra packages and data structures would complicate rather than help our workflow. We are aware of your read.bismark() function in the bsseq package. When run on our data, read.bismark() produces a number of alarming warnings suggesting that incorrect genome annotation has been used for some of the samples. These warnings are false alarms, but would worry users, so we would have to either suppress them or explain them to readers, were we to adopt read.bismark. We are also aware of the readBismark() function in the BiSeq package and the methRead() function in the methylKit package. It seems that every Bioconductor package that analyses BS-seq has its own read function. The functions read the methylation data into a binomial format, whereas the whole point of our article is to show that the methylated and unmethylated counts can be analysed as they are instead of doing a binomial analysis. All produce data structures that are incompatible with edgeR. We could wrangle the data formats back to a DGEList, but the end result would be code that is no shorter or simpler than our original plain base R code. In fact the code would be harder to follow because of the extra packages and data structures involved. It seems unfortunate that the Bioconductor methylation packages all create their own read functions from scratch in order to create their own specialist S4 objects. Each defines a complex S4 class, unique to that one package. 
If Bioconductor provided a fast, light-weight, base-R style function to read Bismark files, then all the packages could use that as a building block instead of re-writing from scratch. In this revision, we have decided to create a new function readBismark2DGE() to read the Bismark files directly into a DGEList object, substantially simplifying our workflow for users. The new function is faster than any existing options and over 3 times as fast as read.bismark() on the workflow data. The new function creates simple list output with minimal processing and so serves the purpose of a light-weight base-R style function. You comment on aggregation over promoters and guidance about hypotheses. We have added more text to clarify that aggregation of counts over gene promoters tests for overall changes only, but we think that users would have expected this already and shouldn’t be surprised. The thought example you give is indeed extreme and does not match real data that we have seen. Even if such a phenomenon could occur, our workflow includes a CpG-level analysis, so readers can easily check for CpGs changing in different directions within a promoter if that is of interest to them. We did this check ourselves and found that for 99% of genes whose promoters contain any DM CpGs, all the DM CpGs change in the same direction. In almost all the remaining cases, only one CpG changes in the “wrong” direction. The only exception is Mir1298 on Chromosome X that has two CpGs changing in the other direction to the majority. Even if there were opposing effects within a few gene promoters, it wouldn’t invalidate our aggregate analysis. It would simply mean that exceptional genes would have to be analyzed and interpreted individually to understand the full story, and we see no problem with that. We think there is a role for both aggregate analyses and high-resolution analyses of DNA methylation. 
However, we think that aggregate analyses are under-used in the literature and should be encouraged more than at present. We think that genomic analysts sometimes try to accommodate too much complexity, leading to analyses that lack statistical power. This in turn leads to the danger of cherry picking, possibly over-interpreting results that might not be reproducible. Aggregate analyses ignore some of the complexity, but can lead to more reliable and interpretable “big picture” messages. In this article, our promoter region aggregated analysis leads to a clean and reliable correlation of methylation with RNA-seq results, a correlation that is more difficult to achieve by other means. Our remark that our method is especially appropriate for RRBS is prompted by the thought that researchers engaged in a WGBS study may have slightly different aims, and this goes back to your comment about hypotheses. In particular, it may be important in a WGBS analysis to discover DMRs in a de novo fashion, pooling results for consecutive genomic loci without the use of gene annotation. We have added remarks in the Introduction and the Discussion to clarify our aims and to briefly indicate possible extensions. There is no problem with computational resources – our pipeline is reasonably efficient. In this revision, we have relaxed the filtering threshold to 8 instead of 10 reads per CpG, although it makes no material difference to our analysis. While the filtering does indeed remove 85% of the CpGs, it keeps more than half the counts. The number of CpGs retained in the analysis is plenty for our purposes. In this revision, we form the MDS plot in a slightly better way and P8_6 is no longer an outlier. In Figure 2, the meaning of “average abundance” is explained by the axis label as log count per million, i.e., coverage for that CpG divided by sequencing depth. In effect, it is the coverage for each CpG relative to other CpGs. It is independent of sequencing depth."
}
]
}
] | 1
|
https://f1000research.com/articles/6-2055
|
https://f1000research.com/articles/7-1605/v1
|
05 Oct 18
|
{
"type": "Review",
"title": "The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future",
"authors": [
"Lesley A. Schimanski",
"Juan Pablo Alperin"
],
"abstract": "Review, promotion, and tenure (RPT) processes significantly affect how faculty direct their own career and scholarly progression. Although RPT practices vary between and within institutions, and affect various disciplines, ranks, institution types, genders, and ethnicity in different ways, some consistent themes emerge when investigating what faculty would like to change about RPT. For instance, over the last few decades, RPT processes have generally increased the value placed on research, at the expense of teaching and service, which often results in an incongruity between how faculty actually spend their time vs. what is considered in their evaluation. Another issue relates to publication practices: most agree RPT requirements should encourage peer-reviewed works of high quality, but in practice, the value of publications is often assessed using shortcuts such as the prestige of the publication venue, rather than on the quality and rigor of peer review of each individual item. Open access and online publishing have made these issues even murkier due to misconceptions about peer review practices and concerns about predatory online publishers, which leaves traditional publishing formats the most desired despite their restricted circulation. And, efforts to replace journal-level measures such as the impact factor with more precise article-level metrics (e.g., citation counts and altmetrics) have been slow to integrate with the RPT process. Questions remain as to whether, or how, RPT practices should be changed to better reflect faculty work patterns and reduce pressure to publish in only the most prestigious traditional formats. To determine the most useful way to change RPT, we need to assess further the needs and perceptions of faculty and administrators, and gain a better understanding of the level of influence of written RPT guidelines and policy in an often vague process that is meant to allow for flexibility in assessing individuals.",
"keywords": [
"promotion",
"tenure",
"incentives",
"academia",
"higher education",
"publishing"
],
"content": "Introduction\n\nThere is some question as to whether the academic system, and its means of evaluating the worth of its faculty’s contributions, has kept pace with the rapid evolution of technology and communications (e.g., Genshaft et al., 2016; Howard, 2013; Piwowar, 2013; Sanberg et al., 2014), as well as with societal goals such as ensuring equal opportunities for employment and career advancement regardless of gender, ethnicity, or other personal characteristics (e.g., Johnsrud & Jarlais, 1994; López et al., 2018; Menges & Exum, 1983; Whittaker et al., 2015). Some common complaints about academia, including those focused on lack of reproducibility (Open Science Collaboration, 2015), problems with peer review (Ross-Hellauer, 2017; Smith, 2006; Tennant et al., 2017), and the lack of access to research, could conceivably be reduced by building mechanisms that capitalize on freely available online communications and information-sharing tools. There is a clear desire by some, if not many, to make changes to the ways in which we organize academic activity and the dissemination of its products (e.g., Ellison & Eatman, 2008; O’Meara, 2014; O’Meara et al., 2015; Sid & Richardson Foundation Forum, 1997). However, there are barriers to change.\n\nChief amongst these barriers are the incentive structures currently in place for faculty career advancement. As Buttliere (2014) pointed out, “the problem is an ineffective reward system which makes doing the prosocial action … bad for the individual because it less efficiently achieves high impact work and thus promotion” (p. 1). To address the problems of academia, and conceptualize how its scholarly communication system might be improved to promote the active sharing of information and support a more efficient and transparent approach to conducting research, it is essential to understand the explicit rules of the game. 
To aid in this understanding, this paper offers a synthesis of the literature on review, promotion, and tenure (RPT) practices in the United States and Canada.\n\n\nIdentifying RPT issues and areas for reform\n\nThe literature shows that there has been discontent, especially on the part of faculty but also among administrators, regarding the methods used to evaluate faculty for tenure and promotion (e.g., Diamond & Adam, 1998; Gordon, 2008; Harley et al., 2010). The major concerns about the RPT process can be grouped into two themes. Firstly, many faculty experience a dissonance between the apparent focus of RPT evaluation on research and publication, versus their actual work responsibilities, which often result in spending over half of their work hours on teaching (e.g., Diamond & Adam, 1998). Some also spend a great deal of time on service activities, which are barely recognized in the RPT process (e.g., Foos et al., 2004; Mamiseishvili et al., 2016). Secondly, faculty are concerned about the amount or type of publishing that is expected of them, the way their published works are assessed, and that the venues in which they are expected to publish (i.e., prestigious international and national journals, and university presses) don't have the capacity to support the amount of publication that universities want from their faculty (e.g., Adler et al., 2009; Brembs et al., 2013). Both of these factors can lead to frustration for university faculty.\n\nThat both of these concerns involve the publishing of research is not a coincidence: the challenges in scholarly communications and those of career advancement are intricately linked. To reduce the incongruence experienced by those wanting to both appropriately communicate their scholarship and advance successfully in their careers, it is necessary to understand the process that rewards activities within academia, especially as it pertains to publishing practices. 
A greater understanding of the RPT process may reveal an effective and efficient means for change. Changing the RPT process might lead to a reduction of the reliance on publication prestige or easily manipulated citation metrics, a restructuring of the peer review system, and even to an improvement in the quality, affordability, and flexibility of format in publishing venues. We begin by examining research that discusses how research, teaching, and service are viewed in RPT, and how the process places an emphasis on research and publications.\n\n\nResearch, teaching, and service in the review process\n\nIn general, candidates for tenure and promotion are judged based on their research and publications, teaching effectiveness, and service. Although explicit weights for each aspect are not typically provided in RPT guidelines and policies, most faculty, across disciplines, assume that a strong research and publication record is necessary, and lack thereof cannot be compensated for by excellence in teaching and service (Green, 2008; Harley et al., 2010; Youn & Price, 2009). This pattern has remained consistent since at least the 1990s.\n\nTenure and promotion requirements have changed over time: in the 1980s most university departments wanted to see excellence in at least one of research, teaching, or service (Gardner & Veliz, 2014), and then a shift occurred in which excellence in teaching and service was no longer sufficient to earn tenure (Youn & Price, 2009). By the 2000s, excellence in all three was expected with the most focus placed on research. This trend began to elicit concerns: in a 1989 survey of 5000 faculty (from two- and four-year U.S. academic institutions, across a spectrum of different disciplines), 68% agreed with the statement, “At my institution we need better ways, besides publications, to evaluate the scholarly performance of the faculty” (The Carnegie Foundation for the Advancement of Teaching, 1989; p. 52). 
Although 71% of these faculty preferred teaching to research, publishing was considered a dominant factor in determining faculty career success (The Carnegie Foundation for the Advancement of Teaching, 1989). Similarly, in 1989-1990, the Higher Education Research Institute surveyed over 35 000 faculty who taught undergraduate courses, representing all categories of higher education institutions in the U.S., and 44% of faculty at public universities felt that institutional demands for research productivity interfered with their ability to teach effectively (Astin et al., 1991).\n\nFairweather (1993) analyzed surveys of over 4000 faculty in four-year colleges and universities and found that research productivity best predicted success in promotion, tenure, and salary increases across institution types and disciplines. Teaching was rarely a contributing factor to RPT success, and, in some cases, salary appeared to be negatively influenced by teaching hours. Commentary at the time reflected these ideas: an article appeared in The Chronicle of Education entitled “Teaching Awards: Aid to Tenure or Kiss of Death?” and another article commented, “Some professors … regard the Teacher of the Year Award as the kiss of death … I personally know three different professors at three different institutions who have gotten the Teacher of the Year Award and were then told that their contracts would not be renewed” (Sowell, 1990, p. 69).\n\nSome have theorized that faculty focus on research is necessary for the advancement of knowledge generation and thus should be the most important and valued aspect of an academic career (Zuckerman & Merton, 1972). Correspondingly, an investigation in 49 research and doctoral universities in 1991-2 revealed that faculty, chairs, and deans found their institutions focused strongly on research, but the respondents also stated they would prefer more balance between teaching and research (Diamond & Adam, 1998). 
Interestingly, those in each position viewed those in the other positions as perpetuating the bias towards research more than their own group. A follow-up study in 1996-7 surveyed 11 of the same institutions originally studied in 1991-2 (Diamond & Adam, 1998). The follow-up study indicated a significant shift in priorities at research universities, with stronger support for balance between teaching and research in all three employee groups – it was perceived that teaching was indeed receiving more weight towards RPT than in the past. However, open-ended comments in the 1996-7 responses indicated that although a shift had occurred, policies for RPT still rewarded research more than teaching, and allocation of university resources still favored research over instruction as well.\n\nIn Tang & Chamberlain’s (1997) study of regional universities, administrators thought teaching is a crucial and rewarded activity of faculty (see also Sid & Richardson Foundation Forum, 1997), but faculty perceived that only the research component of their job requirements was actually rewarded (see also Wolfgang et al. 1995, for similar findings in pharmaceutical faculty). Although administrators agreed regarding the importance of research, faculty can experience a disconnect in that they feel teaching is not valued in the reward system although it is an expected activity (Tang & Chamberlain, 1997). Wolfgang et al. (1995) suggested RPT policies should more accurately represent the investment faculty make in both teaching and research, with the goal of validating effort and recognizing success in both capacities.\n\nGordon (2008) gave examples of faculty role conflicts, such as these comments from an assistant professor at a research university: “As a small, private university, this organization has aspirations of more research-focus. While we are supposed to focus on teaching, a colleague recently failed to receive tenure for lack of publications. 
This indicates to me that we are expected to produce research regardless of the school's expectations of teaching” (p. 32). Gordon observed that many faculty respondents to her survey felt tension between their roles as teacher and researcher, and developed ways to cope with this stress. Some actively gave preference to one role or the other, and others worked on their research during vacation time to meet their tenure requirements. And, the observations reported in Gordon’s study differed based on the type of institution they were collected from: faculty at research institutions reported RPT process prioritization on research-related activities, and less on teaching-related activities. Faculty at teaching institutions reported the opposite. However, faculty at hybrid institutions (those that equally value teaching and research) perceived that research was valued more than teaching, just like at research-focused institutions.\n\nA study of information systems faculty in the late 1990s also reported tension between research, teaching, and service activities (Whitman et al., 1999). Faculty from both teaching and research universities reported an overwhelming amount of service and administrative responsibility. Those at teaching institutions felt there are misconceptions that they have lower research expectations placed upon them. Rather, they feel immense pressure to publish research, often because their institution aspires to move up the ranks, which depends on its overall research productivity. Some reported feeling “victimized by this institutional pressure to achieve in research” (p. 108) alongside large teaching loads, and animosity towards their colleagues at research-oriented institutions who they believed didn't have to teach as much. On the other hand, faculty at research focused institutions expressed frustration at the assumption they don't value teaching. 
They reported “there has been a revival of focus on teaching … in research institutions,” and that teaching effectiveness is considered more strongly than ever in their evaluation procedures.\n\nWhereas the aforementioned studies asked the opinions of faculty, a survey of information science department chairs conducted at about the same time asked respondents to rate the importance of research, teaching, and service in their tenure and promotion decisions (Whitman et al., 1999). On a 10-point scale, research was rated 8.26, teaching 7.99, and service 5.31, showing that at least in some contexts, teaching and research are considered nearly equally important. Several years later, Foos et al. (2004) reported that chairs of geoscience departments in the USA weighted teaching at 48%, research at 37%, and service at 14% in the RPT applications they evaluated. About three-quarters of department chairs rated both course evaluations and publication in national and international journals as crucial.\n\nDespite the views expressed by these samples of department chairs, faculty continued to rate teaching as undervalued in subsequent studies. At the University of Pittsburgh School of Education, May (2005) revealed a conflict between perception of the relative weights of teaching, research, and service towards tenure and promotion versus what faculty thought should be the actual weights. Faculty estimated the actual weights used were 65.6% research, 25.6% teaching, and 8.7% service. They thought the weights should be changed to 49.3% research, 37.3% teaching, and 13.5% service to reduce emphasis on research and increase that on teaching and service. Teaching was the main target for increased focus, and faculty thought research should still contribute half to the decision making process. A few years later, Harley et al. (2010) still observed a corresponding focus on research and publication in RPT at the expense of teaching and service. 
Similarly, van Dalen & Henkens (2012) reported that faculty in high publication pressure environments, as typically experienced in the US, perceived publication in top-rank journals is the strongest factor in determining academic success.\n\nAgain, despite department chairs strongly valuing teaching towards tenure and promotion in some contexts, there remains little career advancement value in the service aspect of a faculty career. Many RPT guideline documents provide lists of research, teaching, and service requirements, but it appears “some bullet points are more equal than others” (p. 269; Macfarlane, 2007). In other words, the requirements toward research and scholarship typically outweighed those pertaining to service contributions, even if explicit weights were not given in the documentation (Green & Baskind, 2007). For instance, University of Pittsburgh guidelines required faculty to document their service activities, but even after stating the importance of service in the evaluation, the School of Education guidelines elucidate that service on its own cannot compensate for a lack of distinguished achievement in scholarly activities such as teaching and research (May, 2005). Thus, service is necessary but not sufficient for promotion or tenure.\n\nHarley et al. (2010) came to similar conclusions from their study of research-focused universities: service and teaching “hold no weight” towards tenure and promotion in the absence of excellence in research and publication. More recent studies tend to agree. Academic pharmacy faculty in the USA raised the question of whether service was appropriately recognized in tenure review (Pfeiffenberger et al., 2014). Canadian faculty similarly reported that success in tenure review depended on research, not teaching or service (Acker & Webber, 2016). In fact, respondents reported that one’s teaching merely “need to not be horrible” (p. 
239) and some even reported removing community service activities from the tenure review packages, or withdrawing from such activities altogether until after tenure. Further, some associate faculty express dissatisfaction because they are expected to devote more time to service, which takes time away from research activities that are more important for promotion (Mamiseishvili et al., 2016).\n\nSimilarly, women tend to spend more time in service roles, and because service is generally undervalued in RPT evaluation, women may be disadvantaged in career advancement (Guarino & Borden, 2017; Misra et al., 2011). Ethnic minorities (e.g., African Americans, Indigenous peoples) in faculty positions often face the issue of being called upon to serve on numerous institutional committees to fulfill diversity policy requirements (e.g., Henry & Kobayashi, 2017; Martinez et al., 2017; Ross & Edwards, 2016), leading also to more work time spent on service, taking time away from those activities valued more in career progression.\n\nThe emphasis on research and publication in the RPT process encourages faculty to focus on career advancement by conducting research of high visibility in academic circles, with less incentive to encourage dissemination of the findings to the public. Along these lines, it has been suggested that a new category be added to the RPT “trifecta” of research, teaching, and service. Harley et al. (2010) suggest this new category could include scholarly contributions that are generally not peer reviewed but aim to disseminate information to a wider audience, and could be considered a mid-point between service and research (see also Scheinfeldt, 2008). However, Harley et al. acknowledge that with few faculty including these types of contributions in their RPT packages, there is little in the way of guidelines or procedures in place for assessment. 
In addition, evaluating these additional materials could be time consuming and arbitrary, and the expectation for peer review may limit which contributions reviewers find meaningful.\n\n\nQuantity, quality, and prestige of publications for RPT\n\nIf it is clear that research and publications are presently the most important components of the review process, then what should academics focus on: quantity, or quality? Or is it about seeking prestige? Publications in the most prestigious venues are not necessarily those of the highest research quality; other factors such as the editors’ perceived novelty and importance of the findings also determine likelihood of acceptance for publication.\n\nSome aspects of the evaluation of publications in RPT appear to have remained relatively consistent over the past few decades. In the 1990s and early 2000s, several studies found that those evaluating faculty for promotion or tenure preferred to focus more on the quality of their research and impact of publications as opposed to the quantity of papers (Cronin & Overfelt, 1995; Estabrook & Warner, 2003). Department chairs liked to see “value,” “quality,” “legitimacy,” and “weight” in the publications (Andersen & Trinkle, 2001). Publishing in peer-reviewed journals was, and remains, a key to demonstrating research quality (Acord & Harley, 2013; Andersen & Trinkle, 2001; Cronin & Overfelt, 1995; Harley et al., 2010; King et al., 2006; Seipel, 2003) and the quality of the peer review offered by particular journals is also an important consideration (Andersen & Trinkle, 2001).\n\nDespite consistent value being placed on research quality and peer-reviewed publications, there is some concern that RPT research and publication requirements are gradually increasing, resulting in greater workloads and an imbalance between varied job responsibilities and the reality that faculty are expected to produce more papers and books than ever before. 
Estabrook & Warner (2003) provided evidence that standards for publishing in book-centric disciplines had increased based on the reports of faculty in History, English, and Anthropology departments who had received tenure. Similarly, academics in other disciplines feel pressure to publish particular numbers of articles due to RPT policy (Walker et al., 2010). There can be either formal, or informal and verbally communicated, expectations regarding the number of articles required for tenure or promotion. For instance, King et al. (2006) found that at UC Berkeley it is typical to need three or four peer reviewed articles per year to succeed in RPT applications in biostatistics and chemical engineering. To achieve full professor in chemical engineering, one needed about twenty papers in major journals as well as widespread and international recognition in their research specialty. Similarly, Foos et al. (2004) reported that 27% of US geoscience departments had guidelines regarding the number of publications needed to earn tenure: the requirement was 3.7 publications on average, with a range between one and twelve.\n\nHarley et al. (2010) found that across various disciplines, those assessing faculty for tenure or promotion were looking for numerous and exceptional publications that represent significant progress in their field of study, are deemed high in quality by both internal and external reviewers, and can be described as “groundbreaking,” “indicative of sustainable scholarship,” and “lauded by the larger community of scholars” (p. 7). It can be difficult to quantify exactly how many journal articles are necessary for tenure within and across different disciplines – the guidelines are not always specific, and can allow for some flexibility, especially in order to take quality into consideration.\n\nRegarding top-tier versus second-tier institutions, Harley et al. 
(2010) found that some faculty perceived second-tier institutions to have less stringent publication requirements. The list of acceptable journals and presses was thought to be more inclusive, fewer publications were needed, and more emphasis was placed on teaching. Similarly, economics department chairs revealed that more prestigious departments required more second-tier publications to make up for the lack of publishing in a top-ten journal (Liner & Sewell, 2009). However, some faculty in Harley et al.'s study thought that the requirements at top-tier research universities influenced the policies of lower-ranked institutions, with lower-ranked institutions attempting to move up the rankings by increasing their research presence.\n\nFaculty rank can influence the career advancement process from both the applicant and the evaluation sides. On the applicant side, the evaluation process can be qualitatively different for tenure applications versus applications for promotion to full professor. Harley et al. (2008); Harley et al. (2010) found that assistant professors applying for tenure feel pressure to publish only in high-impact, high-prestige venues, whereas associate professors may publish in more varied formats, even including encyclopedias or electronic resources. It may be associate professors applying for promotion who pave the way towards inclusiveness of different media in the RPT process – this group, having already been granted tenure, has a tendency to be more open-minded towards non-traditional forms of publishing (Harley et al., 2010). 
That being said, Liner & Sewell (2009) reported that in economics departments, those applying for promotion to full professor had to compensate more than those applying for tenure if they lacked publications in top-tier journals.\n\nThere is also evidence that shared authorship can influence the value placed on publications in RPT evaluations, which can be a cause for concern in fields of research that are increasingly collaborative in nature (e.g., Soares, 2015). Walker et al. (2010) found that journal article authors ranked journal impact factor, number of publications, and order of authorship as most crucial for tenure and promotion, whereas the number of authors on a paper was less of a concern. At some universities, only the first or corresponding authors received credit in the RPT process, whereas in other institutions, second and third authorship was rewarded. First or corresponding authors tended to benefit the most towards promotion, tenure, and/or financial compensation (Mahoney, 1985; Seipel, 2003; Wren et al., 2007).\n\nThere seems to be general agreement that scientific content and quality should be more important than the number of publications that are being evaluated. However, it is not always clear what constitutes a quality publication, and there is evidence that those who review RPT applications often do not directly evaluate the scientific merits of every publication listed. It is common to look at the venue of publication as a proxy for quality. This practice has been criticized, most notably in The San Francisco Declaration on Research Assessment (DORA; Cagan, 2013) and the Leiden Manifesto (Hicks et al., 2015); nonetheless, evidence for this approach can be found throughout the literature.\n\nThe practice of differentiating between peer-reviewed and unreviewed publishing mediums, with peer-reviewed being the clear preference, is one method to gauge quality that is relatively uncontroversial and unchanging in recent decades. 
For example, in a survey sent to chairs of information science departments in the late 1990s, 43.7% reported that all journal publications count towards tenure and promotion decisions, whereas 39.2% reported that only certain categories of journal publications count, such as those that are refereed and/or editorially reviewed (Whitman et al., 1999). Peer-reviewed journal articles are the main focus of evaluations in many fields, including astrophysics, biology, economics, business, psychology, women's studies, music, and some fields of political science (Coonin & Younce, 2009; Harley et al., 2010; Harley et al., 2008).\n\nAlthough determining whether a journal is peer-reviewed is fairly straightforward, RPT committees also make other distinctions that are less clear-cut. A common shortcut is to give different weights to different kinds of publications, such as those considered to be “top journals,” “prestigious,” “elite,” “impactful” or “international” (King et al., 2006; Seipel, 2003; Walker et al., 2010). Some academic institutions even reward faculty who publish in high impact journals (Nederhof, 2008; “The politics of science,” 2010). In geoscience departments at US universities, national and international journals were rated most important, scoring 1.22 on a scale of 1 (“Very Important”) to 5 (“Not Considered”) (Foos et al., 2004). Book chapters and highly specific or regional journals were rated about 2, and refereed electronic journals and symposium volumes around 2.3. Government publications, textbooks, lab manuals, field guides, and technical reports were rated lower still in importance, beyond 2.5 on the scale. In another instance, both specific numbers of publications and a qualifier were used: in the field of information systems, there was an expectation of at least four articles published in “elite journals” to earn tenure (Dennis et al., 2006).
Taken together, these studies provide further evidence that, in terms of career success, faculty should aim to publish with as much prestige as possible, regardless of whether that represents the most appropriate medium for disseminating the work.\n\nThis evaluation strategy seems to also apply to fields that require faculty to write books or monographs as part of their tenure requirements, including music theory and history (Harley et al., 2010; Harley et al., 2008). As with journals, there can be standards as to what types of books and publishers are the most valuable for tenure or promotion. For example, textbooks generally contribute less towards an application than does a scholarly monograph (Liner & Sewell, 2009). Both peer review and prestige carry weight, with the choice of publisher playing a crucial role. Books published by presses with editorial boards, or those that provide peer review of book submissions (e.g., members of the Association of American University Presses), are often weighted more heavily than those from commercial publishers (Thatcher, 2007). UC Berkeley administrators stated that “books should be published by prestigious university presses” (p. 54), with faculty understanding that this is to ensure the book is scholarly and adheres to high standards (King et al., 2006). Although standards and expectations vary across institutions and fields, the studies cited above show a clear desire for rating or ranking the quality of a candidate’s contributions, something that seems to be done in large part based on the known reputation of the publishing venue (be it the journal or the publisher).\n\nPerhaps in an attempt to get away from the subjective nature of judging prestige, many departments have taken to using the Journal Impact Factor in assessing the value of publications towards RPT.
A journal's impact factor is the number of citations received in the current year by articles published during the previous two years, divided by the total number of articles published during those same two years (Garfield, 1999). The impact factor has been widely debated and criticized, not least because of its inappropriateness for judging the quality of individual articles or researchers. Despite the well-documented critiques and adverse effects (e.g., Haustein & Larivière, 2015; Hicks et al., 2015; Larivière & Sugimoto, 2018), the importance of the impact factor to RPT was reported across all types of faculty positions and countries surveyed by Walker et al. (2010). Adler et al.’s (2009) confidential surveys provide examples of formulas that rely on impact factors to assess publications in RPT. One example reads:\n\n“My university has recently introduced a new classification of journals using the Science Citation Index Core journals. The journals are divided into three groups based only on the impact factor. There are 30 journals in the top list, containing no mathematics journal. The second list contains 667, which includes 21 mathematics journals. Publication in the first list causes university support of research to triple; publication in the second list, to double. Publication in the core list awards 15 points; publication in any Thomson Scientific covered journal awards 10. Promotion requires a fixed minimum number of points” (p. 10).\n\nA second example reads:\n\n“In our department, each faculty member is evaluated by a formula involving the number of single author equivalent papers, multiplied by the impact factor of the journals in which they appear. Promotions and hiring are based partly on this formula” (p. 10).\n\nThese examples illustrate that some institutions may see the impact factor as a convenient shortcut in assessing the research contributions of their faculty.
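Garfield's two-year calculation, and a points scheme of the kind quoted by Adler et al. (2009), can be sketched as two small functions. All of the numbers and tier labels below are hypothetical, invented purely to illustrate the arithmetic; they are not drawn from any real journal or institutional policy.

```python
def impact_factor(citations_in_year, articles_prev_two_years):
    """Two-year Journal Impact Factor (Garfield, 1999): citations received
    this year by articles published in the previous two years, divided by
    the number of articles published in those same two years."""
    return citations_in_year / articles_prev_two_years

def publication_points(journal_tier):
    """Points scheme of the kind quoted by Adler et al. (2009): 15 points
    for a 'core list' journal, 10 for any other covered journal, 0 otherwise.
    The tier labels are illustrative, not taken from any actual policy."""
    return {"core": 15, "covered": 10}.get(journal_tier, 0)

# Hypothetical counts, for illustration only: 250 citations this year to a
# journal's articles from the previous two years, of which 100 were published.
print(impact_factor(250, 100))     # 2.5
print(publication_points("core"))  # 15
```

The second function shows why such schemes are attractive to committees (the output is a single comparable number) and also why they are criticized: the journal's tier, not the article's content, determines the score.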
Similarly, Malsch & Tessier (2015) report a journal rankings list used as part of their institution’s Research Incentive Policy applied in the context of determining career advancement. In this case, the authors’ field of study prohibited them from publishing in their institution’s top-ranked journals, leading to potential career consequences due to journal ranks largely based on Journal Citation Reports. Systems like this are even prompting evaluations of the usefulness of publishing in particular journals for the specific purpose of promotion and tenure (e.g., Janvrin et al., 2015).\n\nSuber (2010) criticized the practice of using “journal prestige and impact as surrogates for quality” (p. 119), suggesting that it is a time saver to determine whether the journals overall are high-impact or high-prestige rather than assess the actual articles. Suber acknowledged that promotion and tenure committees can't all be experts in the candidate's field and often have to assess numerous candidates, not allowing for sufficient time to evaluate materials with the depth required to determine research quality. Even bringing in the opinions of external reviewers and experts in the candidate's discipline doesn't necessarily help the issue. External reviewers don't always have a direct connection with the candidate and may evaluate based on the apparent prestige of their publication record and how well known they are in their field (Harley et al., 2010). Together, these factors suggest that evaluation of applications for promotion or tenure is a realm in which faculty may be over-stretched, which encourages use of the impact factor to gauge the quality of research publications as a way to ease workload. 
As a result, most faculty (e.g., 68% in medical fields) perceive journal impact factor as important to their performance review and promotion (Walker et al., 2010).\n\nIf impact factors do not provide adequate information for RPT, what other indicators may be considered in the RPT process? Some institutions assess faculty’s track record of securing grant funding as part of RPT evaluation. Liner & Sewell (2009) found that in economics departments in the USA, external competitive grants generally counted towards tenure or promotion, although the size of the grant was more important in the application for full professor than it was for tenure. Securing grants is also typically important for RPT in the sciences, including biology and astrophysics (Harley et al., 2010). Foos et al. (2004) found that 41% of geoscience departments in the USA require evidence of obtaining research funding in order to award tenure. However, this is not always the case. In one documented example, Duke University Medical School does not consider external funding for the promotion and tenure of clinical or basic science faculty (Nunez-Wolff, 2007).\n\n\nModern approaches to evaluating research output\n\nNumerous advances have occurred in scholarly communication over the last decades, some of which include online publication and databases, academic use of social networks, and analytic tools aimed at quantitatively assessing the reach of individual publications. Specific metrics have been developed that have the potential to reflect the influence of a candidate's publications in their field of specialty more accurately than the impact factor. Are such alternate citation measures considered in RPT evaluations?\n\nIndeed, some institutions have begun to consider additional citation metrics, such as citation counts per journal article, in their decision making process (Reinstein et al., 2011).
Such citation searching may be required as part of the RPT application, and it is not always an easy task for the candidate to carry out. The amount of support available from the university library varies across institutions (Dagenais Brown, 2014), although there are freely available online resources that provide guidance in choosing and interpreting scholarly literature metrics for different situations (e.g., http://www.metrics-toolkit.org). Indeed, some have predicted there will be a movement towards using alternative metrics (altmetrics) to assess the influence of research findings for RPT (Darling et al., 2013; Piwowar, 2013). Altmetrics can involve such measurements as views, discussion posts, or social media shares, of either the original research articles or other products that result from the research, such as datasets.\n\nThe idea of altmetrics is still quite new – the term itself was coined only in 2010 – and so the integration of these alternate measures of research communication with RPT processes remains in flux (Howard, 2013). Some view altmetrics as a potentially informative addition to RPT evaluations, but there are concerns regarding the value of the data. For instance, a low-quality publication in a broadly interesting, or new and exciting, field of research may generate a lot of online “buzz”, whereas a high-quality publication in a niche field may attract far less attention. Accordingly, although Gruzd et al. (2011) found a majority (65%) of library and information science faculty agreed that online social media use should be considered in the tenure and promotion process, most were unsure of exactly how such professional social media use should be formally evaluated. Also, 73% of faculty in this study stated that online social communication tools have significantly influenced how they use traditional information sources.
This widespread, but currently informal, use of social media (including platforms like Twitter, Mendeley, and blogs) has become an integral part of how some academics stay informed on progress in their fields, and can even help to accelerate the pace of scientific discovery. Despite this, only a minority (12% of faculty) in the Gruzd et al. (2011) study reported that their tenure and promotion procedures acknowledged so-called alternate forms of scientific communication.\n\nAccordingly, there is little published evidence of RPT procedures directly acknowledging academic service involving outreach to the academic and public communities. In fact, Harley et al. (2008, 2010) found that across a number of disciplines at research-intensive institutions, pre-tenured faculty were encouraged to focus on high-impact publishing and not invest too much time on committee work, public engagement, or writing in non-traditional formats such as commentaries or blogging. Although raising scholarly visibility with blogs, working papers, or preprints may indirectly help a tenure application, Harley et al. (2010) reported that these items are not typically included in tenure applications, and may be considered neutral or even negative in the review process. Similarly, Goldstein & Bearman (2011) found little emphasis on community service or engagement in the RPT process at medical schools. In general, these types of activities, along with the sharing of unpublished work and using social media such as tweeting, haven't been valued by tenure and promotion committees, but there is some indication this might begin to change (Fox, 2012; Gruzd et al., 2011; Piwowar, 2013).\n\nOne example of scholarly social media being considered in RPT evaluation is that of the Mayo Clinic – starting in early 2016, digital portfolios were allowed in evaluations for promotion (Cabrera et al., 2017).
Cocchio & Awad (2014) reported that across medical, nursing, and pharmacy programs, deans have varying views regarding the value of social media in the evaluation of scholarly activity. Of these deans, 31% were of the opinion that high viewership of scholarly works increased academic merit, and 52% thought that peer review of materials published online would also add value. It seems that the consideration of social media and altmetrics in RPT practice would be facilitated by implementing clear-cut structures for evaluation, and including the well-accepted trait of peer review in assessing value.\n\nImportantly, Harley et al. (2010) found that engagement with the public is generally valued across disciplines and by institutions. There is recognition for faculty who facilitate public education or find other ways to give back to the public as a way to acknowledge taxpayer funding. However, attempts to become a public figure aren't without risk. Traditionally, some departments view negatively those who attempt to popularize their research niche (“An interview with Aaron Barlow, editor of Academe, the magazine of the American Association of University Professors,” n.d.). And, some academics view high levels of public engagement as only appropriate for those who have already been granted tenure and are well known to academics in their field; faculty may garner criticism if their public persona is not balanced with significant research contributions (Harley et al., 2010). However, Aaron Barlow argues that any academic who has succeeded in having their work taken seriously by the public is likely to also be taken seriously in RPT (“An Interview with Aaron Barlow,” n.d.).\n\nIt has also been suggested that universities should shift to formally recognizing the translational value of academic research in the RPT process (Sanberg et al., 2014).
In general, about half of faculty agree that the societal impact of one’s scholarly work should be a key RPT consideration (Wolff et al., 2016). Specifically, patents, licensing, and commercialization could be credited in order to encourage faculty to engage in use-oriented research that has the potential to positively affect society. Further, about 35% of faculty believe data should be credited equally to academic publications in RPT evaluations, and 37% believe software/code should be equally credited (Wolff et al., 2016). Sanberg et al. (2014) report that an apparent minority of US institutions have integrated these ideas into RPT policy, and changes in policy on this theme have likely been slow because they have been initiated primarily at the level of individual departments (bottom-up) rather than that of the institution (top-down).\n\n\nBeyond RPT guidelines\n\nTo reform the RPT process, it might be logical to begin by examining how the process has been instituted in formal guidelines. However, it seems that RPT guidelines can be unclear (Smesny et al., 2007), or purposefully vague, to allow for flexibility in each applicant's situation. Although promotion and tenure committees usually do attempt to use objective measures, in reality, the procedures, criteria, and weights used can vary between applicants and between departments (Claxton, 2005; Walker et al., 2010). Macfarlane (2007) observed that institutions typically don't specify weights to convey which of their tenure and promotion criteria are the most important. May (2005) found that all promotion and tenure documents from several research-focused universities addressed research, teaching, and service, but the language of the policy tended to be very broad, so as to allow for interpretation.
All documents that May reviewed had specific requirements with regard to publication of research findings, but the expectations for teaching were less clear and more variable, and the definitions of service requirements were the most vague.\n\nFaculty rank and institution type may also affect the way one views the RPT process itself. Diamantes (2004) reported that tenured faculty perceived that the requirements were well communicated, but untenured faculty expressed a degree of uncertainty regarding the expectations. Estabrook & Warner’s (2003) study on Anthropology, English, and History departments, however, found no relationship between faculty age or tenure status and opinion on whether a book should be required for tenure. And, Gordon (2008) found that faculty at research and hybrid (research/teaching) universities report less confusion about publishing requirements than faculty at teaching universities. She provided different examples of faculty from research universities who had specific guidelines for publishing (e.g., six publications in six years) versus guidelines that were difficult to interpret, as one respondent wrote: “It is 45% of my responsibility allocation, but I’m not sure that tells the whole story. I think its more that I need to have quality or quantity of pubs. I’m not sure how they can translate that into a percentage” (p. 64). King et al. (2006) described the RPT process in chemical engineering at UC Berkeley as having vague and ambiguous written guidelines – even requirements for publication were not clearly stated.\n\nDespite ambiguous guidelines, faculty in King et al.’s (2006) study reported a clear understanding of how to succeed in career advancement, indicating the value of informal communication within the department in supporting its members. 
In survey responses, faculty expressed the opinion that vague requirements are understandable because the RPT process is “unquantifiable,” and that “if I'm doing my job right, tenure should come along with it” (p. 39; King et al., 2006). Acker & Webber (2016) similarly reported that in Ontario, Canada, many candidates found the rules for tenure criteria lacking in clarity. And, Prottas et al. (2017) found that faculty in the northeastern USA experienced a lack of clarity, and perceived unfairness, in their tenure criteria and in their institutions’ decision making processes. In the UC Berkeley Anthropology Department, it was acknowledged that the process for career advancement can be unclear to junior faculty, therefore it is the responsibility of the department chair to explain tenure expectations to new hires (King et al., 2006).\n\nHarley et al. (2010) also received reports of considerable flexibility in tenure and promotion judgement at research universities. Excellent quality in research and publication was most important and could override unwritten rules about the numbers of journal articles, books, or citations required. Special forms of scholarly evidence, such as the products of interdisciplinary research, creative pursuits, and many practices more common in the arts, can require special attention by reviewers. Harley et al. noted that RPT policy had built-in mechanisms to credit these types of activities as appropriate, and that each RPT application receives a great deal of attention in its adjudication. 
May (2005) concluded that the paucity of specific weights or values for any particular aspect of tenure or promotion applications leads to decisions being made by individuals and committees using their own “weighted judgement for each given criteria,” or by viewing all evidence together to make a prediction about the applicant's potential for making ongoing and substantial scholarly contributions.\n\nEstabrook & Warner (2003) also found evidence of tenure and promotion committees deviating from policy in making career advancement decisions in the somewhat variable disciplines of Anthropology, English, and History. Here, it is generally expected that faculty members will have published a scholarly book or monograph prior to making a tenure application. However, most official promotion and tenure guidelines indicate that either a book or a considerable number of substantial and peer reviewed publications may be accepted. When Estabrook and Warner interviewed 17 department chairs in these disciplines, the chairs consistently acknowledged the option given in the RPT guidelines, but stated that most faculty (with the exception of those in a few specific subfields) needed to publish a book to receive tenure.\n\nOne way to describe the relationship between RPT policy and the way the RPT process is actually carried out is to acknowledge the difference between institutional policy and departmental expectations. Institutional policy tends to be broad with many potential criteria for faculty to meet in order to earn career advancement. Departments may pick and choose from the institutional framework which criteria are their particular deal-breakers, and which items can be overlooked in favor of other candidate qualities and contributions.
This, of course, can also lead to differences in the RPT process not only between institutions, but between departments within the same institution, as reported by Andersen & Trinkle (2001).\n\nJust as departments within an institution can vary in their RPT practices, so can departments of the same discipline across different institutions. For instance, Liner & Sewell (2009) surveyed 125 economics department chairs regarding their consideration of applications for both tenure and advancement to full professor, and found variability between them in how much credit was reduced for co-authored papers. Reports from faculty in the field of English-language literature echo this theme, with one faculty member in King et al.’s (2006) study stating that the norms for advancement in the field “vary wildly” (p. 23). Although the general opinion was that across institutions the differences were substantive, faculty were clear on what was required within their own institution. Overall, it seems that policies provide a framework, but that RPT decisions are made on a case-by-case basis with considerable allowances made for differences from the norm.\n\n\nConclusions: The future of RPT\n\nExpectations and practices for review, promotion and tenure have shifted significantly over the last few decades. Although there are differences across institutions, disciplines, and faculty ranks, it is clear that faculty in many contexts are feeling increasing pressure to focus on research at the expense of teaching and service (Otten et al., 2015; van Dalen & Henkens, 2012). In recent years there has been an effort to help the pendulum swing back the other way by allowing for consideration of more varied measures of performance (e.g., altmetrics or non-traditional publishing mediums), but these efforts have not been entirely successful in offsetting oversimplified approaches such as points schemes based on journal impact factors.
As a result, those faculty who wish to value activities beyond traditional research publications in so-called high-prestige venues may face barriers to career advancement.\n\nAlthough there are frustrations with RPT practices, this doesn’t mean the RPT process is fixed as it is today. The noticeable shift towards greater emphasis on research and particular types of publications, along with the documented efforts to counteract those trends, are signs that RPT practices do not go uncontested. Part of this challenge to the current status quo was the San Francisco Declaration on Research Assessment (DORA), drafted at the Annual Meeting of The American Society for Cell Biology in December 2012 (Cagan, 2013) and since signed by over 450 organizations and almost 12,000 individuals (DORA, n.d.). The declaration makes several recommendations that are directly aimed at pushing back on some of the trends in researcher assessment highlighted in this review. In particular, it recommends, among other things, that researchers and those involved in assessing research: 1) “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion or funding decisions;” 2) “Be explicit about the criteria used to reach hiring, tenure and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published;” and 3) “When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than on publication metrics.” The global effect of these recommendations on changing the current RPT practices, however, remains largely unknown.\n\nDORA has inspired much updating of policy and shifting of opinions away from the use of
journal impact factors, but there is still a great need for action to elicit change in the actual procedures used in RPT evaluations. In working with DORA, Curry (2018) has observed numerous instances of RPT procedures that are maintaining the dominance of the impact factor in determining the value of research. The next step is to move beyond declarations and find ways for institutions and funding agencies to change their evaluation practices in the spirit of the declaration (Curry, 2018).\n\nAlthough DORA is promoting change in procedures for evaluation of academic research contributions, the issue of imbalance within the academic “trifecta” of research, teaching, and service remains. Faculty seem accepting of the idea that research may count more towards RPT than the other two elements, but failure to reward teaching and service devalues faculty work in these areas. It may be time to evaluate whether our institutions of higher education and mechanisms of scholarly communication can reflect Boyer's (1996) scholarship of engagement, in which scientific discovery (research) is a crucial function of the university, but so are functions deriving from teaching and service, such as the sharing of information across disciplines, the sharing of knowledge with students and the public, and the application of information to real world problems.\n\nBecause RPT criteria strongly influence where faculty will place their focus, RPT reform may be one of the most successful ways to effect change in the academic system. We believe there are two natural next steps to devising an updated system for evaluating scientific merit: 1) to deepen our understanding of faculty and administrative perceptions of the current reward system and desires moving forward (see also Desrochers et al., 2018); and 2) to assess the relationship between the content of current RPT documents and their actual operationalization into existing practices.
Together with the foundation of information presented in this review, progress in these directions will provide insight into how RPT should be reformed, and whether there may be additional targets for change within the academic system.\n\n\nData availability\n\nNo data is associated with this article.
"appendix": "Grant information\n\nThis study was supported by the Open Society Foundations [OR2016-29841].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAcker S, Webber M: Discipline and publish: The tenure review process in Ontario universities. In Assembling and Governing the Higher Education Institution. Palgrave Macmillan, London. 2016; 233–255. Publisher Full Text\n\nAcord SK, Harley D: Credit, time, and personality: The human challenges to sharing scholarly work using Web 2.0. New Media & Society. 2013; 15(3): 379–397. Publisher Full Text\n\nAdler R, Ewing J, Taylor P, et al.: Citation statistics. Stat Sci. 2009; 24(1): 1. Publisher Full Text\n\nAn interview with Aaron Barlow, editor of Academe, the magazine of the American Association of University Professors. (n.d.), Retrieved October 14, 2016. Reference Source\n\nAndersen DL, Trinkle DA: “One or two is not a problem” or technology in the tenure, promotion, and review process a survey of current practices in U.S. history departments. Journal of the Association for History and Computing. 2001; 4(1). Reference Source\n\nAstin AW, Korn WS, Dey EL: The American College Teacher: National Norms for the 1989-90 HERI Faculty Survey. 1991. Reference Source\n\nBoyer EL: The scholarship of engagement. Bulletin of the American Academy of Arts and Sciences. 1996; 49(7): 18–33. Publisher Full Text\n\nBrembs B, Button K, Munafò, M: Deep impact: unintended consequences of journal rank. Front Hum Neurosci. 2013; 7: 291. PubMed Abstract | Publisher Full Text | Free Full Text\n\nButtliere BT: Using science and psychology to improve the dissemination and evaluation of scientific work. Front Comput Neurosci. 2014; 8: 82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCabrera D, Vartabedian BS, Spinner RJ, et al.: More Than Likes and Tweets: Creating Social Media Portfolios for Academic Promotion and Tenure. 
J Grad Med Educ. 2017; 9(4): 421–425. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCagan R: The San Francisco Declaration on Research Assessment. Dis Model Mech. 2013; 6(4): 869–870. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClaxton LD: Scientific authorship. Part 1. A window into scientific fraud? Mutat Res. 2005; 589(1): 17–30. PubMed Abstract | Publisher Full Text\n\nCocchio C, Awad N: The scholarly merit of social media use among clinical faculty. J Pharm Technol. 2014; 30(2): 61–68. Publisher Full Text | Free Full Text\n\nCoonin B, Younce L: Publishing in open access journals in the social sciences and humanities: Who’s doing it and why. In ACRL Fourteenth National Conference. 2009; 12–15. Reference Source\n\nCronin B, Overfelt K: E-journals and tenure. J Am Soc Inf Sci. 1995; 46(9): 700–703. Publisher Full Text\n\nCurry S: Let’s move beyond the rhetoric: It’s time to change how we judge research [News]. 2018. Publisher Full Text\n\nDagenais Brown J: Citation searching for tenure and promotion: An overview of issues and tools. Reference Services Review. 2014; 42(1): 70–89. Publisher Full Text\n\nDarling ES, Shiffman D, Côté IM, et al.: The role of Twitter in the life cycle of a scientific publication. ArXiv Preprint ArXiv: 1305.0435. 2013. Reference Source\n\nDennis AR, Valacich JS, Fuller MA, et al.: Research standards for promotion and tenure in information systems. MIS Quarterly. 2006; 30(1): 1–12. Publisher Full Text\n\nDesrochers N, Paul-Hus A, Haustein S, et al.: Authorship, citations, acknowledgments and visibility in social media: Symbolic capital in the multifaceted reward system of science. Soc Sc Inform. 2018; 57(2): 223–248. Publisher Full Text\n\nDiamantes T: Online survey research of faculty attitudes toward promotion and tenure. Essays in Education. 2004; 12. Reference Source\n\nDiamond RM, Adam BE: Changing priorities at research universities: 1991-1996. Syracuse, N.Y.: Syracuse University. 1998. 
Reference Source\n\nDORA: Signers – DORA. (n.d.); Retrieved February 24, 2018. Reference Source\n\nEllison J, Eatman TK: Scholarship in Public: Knowledge Creation and Tenure Policy in the Engaged University. Imagining America. 2008; 16. Reference Source\n\nEstabrook L, Warner B: The book as the gold standard for tenure and promotion in the humanistic disciplines. Committee on Institutional Cooperation. 2003. Reference Source\n\nFairweather JS: Faculty reward structures: Toward institutional and professional homogenization. Res High Educ. 1993; 34(5): 603–623. Publisher Full Text\n\nFoos A, Holmes MA, O’Connell S: What does it take to get tenure? Papers in the Geosciences. Paper 88. 2004.\n\nFox JW: Can blogging change how ecologists share ideas? In economics, it already has. Ideas Ecol Evol. 2012; 5(2): 74–77. Publisher Full Text\n\nGardner SK, Veliz D: Evincing the ratchet: A thematic analysis of the promotion and tenure guidelines at a striving university. Rev High Educ. 2014; 38(1): 105–132. Publisher Full Text\n\nGarfield E: Journal impact factor: a brief review. CMAJ. 1999; 161(8): 979–980. PubMed Abstract | Free Full Text\n\nGenshaft J, Wickert J, Gray-Little B, et al.: Consideration of Technology Transfer in Tenure and Promotion. Technol Innov. 2016; 17(4): 197–204. Publisher Full Text\n\nGoldstein AO, Bearman RS: Community engagement in US and Canadian medical schools. Adv Med Educ Pract. 2011; 2: 43–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGordon CK: Organizational rhetoric in the academy: Junior faculty perceptions and roles. University of North Texas. 2008. Reference Source\n\nGreen RG: Tenure and promotion decisions: The relative importance of teaching, scholarship, and service. J Soc Work Educ. 2008; 44(2): 117–128. Publisher Full Text\n\nGreen RG, Baskind FR: The second decade of the faculty publication project: Journal article publications and the importance of faculty scholarship. J Soc Work Educ. 2007; 43(2): 281–296. 
Publisher Full Text\n\nGruzd A, Staves K, Wilk A: Tenure and promotion in the age of online social media. Proceedings of the American Society for Information Science and Technology. 2011; 48(1): 1–9. Publisher Full Text\n\nGuarino CM, Borden VMH: Faculty Service Loads and Gender: Are Women Taking Care of the Academic Family? Res High Educ. 2017; 58(6): 672–694. Publisher Full Text\n\nHarley D, Acord SK, Earl-Novell S, et al.: Assessing the future landscape of scholarly communication: An exploration of faculty values and needs in seven disciplines. Center for Studies in Higher Education. 2010. Reference Source\n\nHarley D, Earl-Novell S, Acord SK, et al.: Interim report: Assessing the future landscape of scholarly communication. Center for Studies in Higher Education. 2008. Reference Source\n\nHaustein S, Larivière V: The use of bibliometrics for assessing research: Possibilities, limitations and adverse effects. In Incentives and Performance. Springer, Cham. 2015; 121–139. Publisher Full Text\n\nHenry F, Kobayashi A: The everyday world of racialized and indigenous faculty members in Canadian universities. In The Equity Myth: Racialization and Indigeneity at Canadian Universities. UBC Press, 2017; 115–154. Reference Source\n\nHicks D, Wouters P, Waltman L, et al.: Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015; 520(7548): 429–31. PubMed Abstract | Publisher Full Text\n\nHoward J: Rise of “altmetrics” revives questions about how to measure impact of research. Chron High Educ. 2013; 59(38): A6–A7. Reference Source\n\nJanvrin DJ, Lim JH, Peters GF: The perceived impact of journal of information systems on promotion and tenure. Journal of Information Systems. 2015; 29(1): 73–93. Publisher Full Text\n\nJohnsrud LK, Jarlais CDD: Barriers to Tenure for Women and Minorities. Rev High Ed. 1994; 17(4): 335–353. Publisher Full Text\n\nKing CJ, Harley D, Earl-Novell S, et al.: Scholarly communication: Academic values and sustainable models. 
Center for Studies in Higher Education. 2006. Reference Source\n\nLarivière V, Sugimoto CR: The Journal Impact Factor: A brief history, critique, and discussion of adverse effects. ArXiv: 1801.08992 [Physics]. 2018. Reference Source\n\nLiner GH, Sewell E: Research requirements for promotion and tenure at PhD granting departments of economics. Appl Econ Lett. 2009; 16(8): 765–768. Publisher Full Text\n\nLópez C, Margherio C, Abraham-Hilaire L, et al.: Gender Disparities in Faculty Rank: Factors that Affect Advancement of Women Scientists at Academic Medical Centers. Soc Sci. 2018; 7(4): 62. Publisher Full Text\n\nMacfarlane B: Defining and rewarding academic citizenship: The implications for university promotions policy. Journal of Higher Education Policy and Management. 2007; 29(3): 261–273. Publisher Full Text\n\nMahoney MJ: Open exchange and epistemic progress. Am Psychol. 1985; 40(1): 29–39. Publisher Full Text\n\nMalsch B, Tessier S: Journal ranking effects on junior academics: Identity fragmentation and politicization. Crit Perspect Accoun. 2015; 26: 84–98. Publisher Full Text\n\nMamiseishvili K, Miller MT, Lee D: Beyond Teaching and Research: Faculty perceptions of service roles at research universities. Innov High Educ. 2016; 41(4): 273–285. Publisher Full Text\n\nMartinez MA, Chang A, Welton AD: Assistant professors of color confront the inequitable terrain of academia: a community cultural wealth perspective. Race Ethn Educ. 2017; 20(5): 696–710. Publisher Full Text\n\nMay DC: The nature of School of Education faculty work and materials for promotion and tenure at a major research university. 2005. Reference Source\n\nMenges RJ, Exum WH: Barriers to the Progress of Women and Minority Faculty. J Higher Educ. 1983; 54(2): 123–144. Publisher Full Text\n\nMisra J, Lundquist JH, Holmes E, et al.: Status of Women: Gender and the Ivory Ceiling of Service Work in the Academy. 2011; Retrieved September 19, 2018. 
Reference Source\n\nNederhof AJ: Policy impact of bibliometric rankings of research performance of departments and individuals in economics. Scientometrics. 2008; 74(1): 163–174. Publisher Full Text\n\nNunez-Wolff CN: A study of the relationship of external funding to medical school faculty success. ScholarlyCommons. 2007. Reference Source\n\nO’Meara K: Change the tenure system. 2014; Retrieved March 18, 2018. Reference Source\n\nO’Meara K, Eatman T, Petersen S: Advancing engaged scholarship in promotion and tenure: A roadmap and call for reform. Liberal Educ. 2015; 101(3). Reference Source\n\nOpen Science Collaboration: PSYCHOLOGY. Estimating the reproducibility of psychological science. Science. 2015; 349(6251): aac4716. PubMed Abstract | Publisher Full Text\n\nOtten JJ, Dodsen EA, Fleishhacker S, et al.: Getting research to the policy table: a qualitative study with public health researchers on engaging with policy makers. Prev Chronic Dis. 2015; 12: E56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPfeiffenberger JA, Rhoney DH, Cutler SJ, et al.: Perceptions of tenure and tenure reform in academic pharmacy. Am J Pharm Educ. 2014; 78(4): 75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPiwowar H: Altmetrics: Value all research products. Nature. 2013; 493(7431): 159. PubMed Abstract | Publisher Full Text\n\nProttas DJ, Shea-Van Fossen RJ, Cleaver CM, et al.: Relationships among faculty perceptions of their tenure process and their commitment and engagement. Journal of Applied Research in Higher Education. 2017; 9(2): 242–254. Publisher Full Text\n\nReinstein A, Hasselback JR, Riley ME, et al.: Pitfalls of using citation indices for making academic accounting promotion, tenure, teaching load, and merit pay decisions. Issues in Accounting Education. 2011; 26(1): 99–131. Publisher Full Text\n\nRoss HH, Edwards WJ: African American faculty expressing concerns: breaking the silence at predominantly white research oriented universities. 
Race Ethn Educ. 2016; 19(3): 461–479. Publisher Full Text\n\nRoss-Hellauer T: What is open peer review? A systematic review [version 2; referees: 4 approved]. F1000Res. 2017; 6: 588. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSanberg PR, Gharib M, Harker PT, et al.: Changing the academic culture: valuing patents and commercialization toward tenure and career advancement. Proc Natl Acad Sci U S A. 2014; 111(18): 6542–6547. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScheinfeldt T: Making it count: Toward a third way. 2008; Retrieved January 23, 2017. Reference Source\n\nSeipel MM: Assessing publication for tenure. J Soc Work Educ. 2003; 39(1): 79–88. Publisher Full Text\n\nSid W. Richardson Foundation Forum: Restructuring the university reward system. 1997; Retrieved January 24, 2017. Reference Source\n\nSmesny AL, Williams JS, Brazeau GA, et al.: Barriers to scholarship in dentistry, medicine, nursing, and pharmacy practice faculty. Am J Pharm Educ. 2007; 71(5): 91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006; 99(4): 178–182. PubMed Abstract | Free Full Text\n\nSoares MB: Collaborative research in light of the prevailing criteria for promotion and tenure in academia. Genomics. 2015; 106(4): 193–195. PubMed Abstract | Publisher Full Text\n\nSowell T: On the higher learning in America: Some comments. Public Interest. 1990; 68–78. Reference Source\n\nSuber P: Thoughts on prestige, quality, and open access. LOGOS: The Journal of the World Book Community. 2010; 21(1/2): 115–128. Publisher Full Text\n\nTang TLP, Chamberlain M: Attitudes toward research and teaching: Differences between administrators and faculty members. J Higher Educ. 1997; 68(2): 212–227. Publisher Full Text\n\nTennant JP, Dugan JM, Graziotin D, et al.: A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; referees: 2 approved]. 
F1000Res. 2017; 6: 1151. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThatcher SG: The challenge of open access for university presses. Learn Publ. 2007; 20(3): 165–172. Publisher Full Text\n\nThe Carnegie Foundation for the Advancement of Teaching: The condition of the professoriate: Attitudes and trends, 1989. The Carnegie Foundation for the Advancement of Teaching. 1989. Reference Source\n\nThe politics of science: Information World Review. 2010; (263): 10–10.\n\nvan Dalen HP, Henkens K: Intended and unintended consequences of a publish-or-perish culture: A worldwide survey. J Am Soc Inf Sci Technol. 2012; 63(7): 1282–1293. Publisher Full Text\n\nWalker RL, Sykes L, Hemmelgarn BR, et al.: Authors' opinions on publication in relation to annual performance assessment. BMC Med Educ. 2010; 10: 21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhitman ME, Hendrickson AR, Townsend AM: Research commentary. Academic rewards for teaching, research, and service: Data and discourse. Inf Syst Res. 1999; 10(2): 99–109. Publisher Full Text\n\nWhittaker JA, Montgomery BL, Martinez Acosta VG: Retention of Underrepresented Minority Faculty: Strategic Initiatives for Institutional Value Proposition Based on Perspectives from a Range of Academic Institutions. J Undergrad Neurosci Educ. 2015; 13(3): A136–A145. PubMed Abstract | Free Full Text\n\nWolff C, Rod A, Schonfeld R: Ithaka S+R US faculty survey 2015. Copyright, Fair, Use, Scholarly Communication, Etc. 2016; 17. Publisher Full Text\n\nWolfgang AP, Gupchup GV, Plake KS: Relative importance of performance criteria in promotion and tenure decisions: Perceptions of pharmacy faculty members. Am J Pharm Educ. 1995; 59(4): 342–347. Reference Source\n\nWren JD, Kozak KZ, Johnson KR, et al.: The write position. A survey of perceived contributions to papers based on byline position and number of authors. EMBO Rep. 2007; 8(11): 988–991. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYoun TIK, Price TM: Learning from the experience of others: The evolution of faculty tenure and promotion rules in comprehensive institutions. J Higher Educ. 2009; 80(2): 204–237. Publisher Full Text\n\nZuckerman H, Merton RK: Age, Aging and Age Structure in Science. In A Theory of Age Stratification. New York: Russell Sage Foundation. 1972; 292–356. Reference Source"
}
|
[
{
"id": "40287",
"date": "12 Nov 2018",
"name": "Jonathan P. Tennant",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a non-systematic review of review, promotion and tenure (RPT) guidelines, including the diversity of practices, historical context, and recent and ongoing developments happening in the world of scholarly communication. This effort is valuable due to the importance of these processes in the wider scholarly communication context, and their influence on cultural and social shifts within the world of scholarly research. The article is well-written, and I believe very timely given the importance of this debate within scholarly communications at the present.\nMy relevant expertise in reviewing this manuscript comes from being a researcher interested in developments in scholarly communication, of which issues to do with careers and incentives come up frequently. As such, I have a vested interest in seeing that research like this is published and widely communicated to stimulate further discussion on critical topics such as this.\nBasic reporting:\nThere do not appear to be any figures or data accompanying this manuscript. While not inherently problematic, I wonder if there are any potential images to go in this paper, simply to help break up the text for readers? The article is well-written, and should be of interest to both generalist and specialist audiences.\n\nGeneral comments:\nI noted several instances in the Abstract where the language was not as clear as it could be. A thorough copy editing is required before the manuscript is accepted for indexing. 
Concerning structure, the Introduction section finishes with the goals of the research, before launching into a ‘Previous Research’ section as a literature review. This section should be integrated into the Introduction section, with the whole section finishing with the aims/goals immediately before the Methods.\n\nAbstract:\nThe abstract is concise and conveys the context and main findings of the research. No key bits seem to be missing. I wonder, if there is space, if a final sentence could be provided just to reiterate the importance and potential impact of this work (similar to the opening sentence).\n\nIntroduction P1:\nProblems with peer review/reproducibility are more about research than ‘academia’, if you want to be niggly. Has anyone studied RPT guidelines and their impact on academia before in an empirical manner? I wonder if the paper of the authors recently published (Alperin et al., 2018) should be cited here, to make the link between these two clearer?\nP2:\nFor Buttliere’s quote, is it worth emphasizing here that thus the present incentives often encourage research to be conducted in ways that are not in the best interests of research, and the wider impact of research on society? Just for the sake of context, could you perhaps explain why the USA/Canada were singled out for this study? Is it just for the sake of simplicity and scope, or because there are things that are inherently different about these countries?\nIdentifying RPT issues and areas for reform P1:\nRe. the Diamond and Adam reference, is this experience based across all types of universities and disciplines, or is it more specific? What are ‘service activities’ in this context too? Actually, the same for the second point too. Are these universal concerns, or are there large areas where we actually don’t have any understanding of how faculty perceive these issues? I suspect that there are probably large gaps in our knowledge here. 
Here, it might help to briefly discuss who actually drafts these RPT guidelines?\nP2:\nI wonder if it’s worth noting that this link even has its own mantra, ‘publish or perish’, just to gain some familiarity with the issue. Perhaps it is also worth noting here the growth of Open Access publishing in recent years (Piwowar et al., 2018), and the relatively poor understanding that we have on its potential impact on hiring and decision-making processes? Perhaps the fact that OA has come from a combination of bottom-up and top-down approaches seems to have created a lot of uncertainty in this space, and an apparent tension between developments in publishing and career advancement, is worth noting here too. As this does also highlight the importance for this study.\nResearch, teaching, and service in the review process P1:\nJust a note here, is tenure something one seeks to attain in all countries? And on that note, I haven’t checked all the references within, but are they focused on the USA/Canada, or are some more broad or based on different geographic systems? I just wonder if it should be made clearer, as it might be a little confusing in a paper about the USA/Canada if some of the evidence being cited is based on a different geographic region. Another thought. This is a non-systematic review, correct? Could the authors perhaps comment on how they selected the articles for this review? I’m not by any means an expert of the literature, and cannot tell from an objective point of view if the articles discussed within are representative or not of the total literature on this topic.\nP2:\nI feel that the detailed criticism of ‘excellence’ by Moore et al. (2017) should be mentioned here for important additional context. ‘Surveyed over’ – typo, space missing. Very important, I know.\nPerceptions of the shift towards prioritizing research in career advancement P1:\nI wonder here if it is worth noting the article by McKiernan et al. 
(2016) which makes a strong case for ‘open’ research practices being beneficial to the career of an individual researcher? I think this fits in because it is sort of a different ‘type’ of research style that is becoming influential, perhaps.\nPerceptions of the balance between research, teaching, and service across institution type, academic position, and demographics P1:\nI really like this discussion, and all of it so far. It’s all relevant, important, and ties into the theme of the paper without any waffle. I wonder though if it is worth discussing more the potential consequences of these perceptions though. For example, the seeking of high impact journals, salami slicing of publications, questionable research practices, the impact on the very social culture of academia, only researching topics perceived to be of high interest rather than academic importance. All of these things seem to be related to the increasingly performance-driven research system in some way.\nQuantity, quality, and prestige of publications for RPT P1:\nI think perhaps it is worth clarifying here what prestige means in this context. Are quantity, quality, and prestige all independent too? Potential Venn diagram alert here. I wonder if here it would be a good idea to cite some of the work by Björn Brembs (Brembs, 2018 and Brembs et al., 2013) on issues to do with journal prestige? It seems potentially relevant to readers.\nP2:\nCould you expand on what is meant by ‘impact’ here? Or any of the other descriptive terms? Or are they just vague? Is there any research out there that shows that peer review affects research quality (positively or negatively) that could be used to enhance this discussion here?\nP3:\nFor Foos et al. (2004), is that per year or in total before attaining tenure? 
I think at some point in this section, it needs to be noted that the focus on peer reviewed scholarly research articles as primary outputs for assessment is a ridiculously discriminatory process; for example, against data collectors/managers, software engineers, lab technicians (etc.) that are critical for the process, but can often be excluded from final publication author lists.\nP7:\nWhat might some of the consequences be of this prioritization of authorship? How might it affect the ways in which authorship orders are determined?\nDefining the quality of scholarship P1:\nFirst sentence, citation needed.\nP2:\nDo you think this binary state of peer reviewed versus non-peer reviewed in demarcating quality is appropriate, and evidence-based? You don’t have to cite me on this, but I have particular issues with this black and white approach to quality differentiation, and feel there are better ways it can be done (Tennant, 2018). What are the ‘certain categories’ here? I see it is explained in the next paragraph, but could be linked better perhaps.\nP3:\nBy ‘reward faculty’ here, do you mean beyond giving them a career? How do you define ‘appropriate’ here? For the wider research community, for the wider public, to align with the mission of the research institute? What impact does the hunt for prestige have on new entrants to the scholarly publishing ‘market’? Could you possibly discuss where this demand for prestigious publications comes from? Surely this was not always the case? Does it have any impact on academic culture, research practices, and public mission of universities?\nP5:\nIs the IF calculation that simple? Can it be replicated? Are the data open? Who controls the data, and are they biased in any way? Are some negotiated? For such an important statistic, I think these things might need commenting on. 
Need to just clarify that the inappropriateness is due to the ‘level’ of the proxy and the lack of correlation between this metric and the level that it is often used for in assessments. And also, perhaps that this was never its intended use. I feel that there is a certain preprint that could be cited here too. I wonder if it is worth further commenting on the fact that the use of the IF in such a manner is a profoundly non-scientific practice, has little basis in reason, and yet seems to be one of the defining features in governing modern academic culture.\nP6:\n‘Shortcut’ compared to what?\nP7:\nThis is a really important piece of discussion, and one which comes up over and over again in defense of using the impact factor. Could the authors possibly comment on some potential solutions to this pain point, from their point of view? (I see the next paragraph touches on this a bit).\nModern approaches to evaluating research output P1:\nCould a couple of examples of each be mentioned here? Some discussion of potentially novel ways to provide incentives and reputation are given here again too, largely based on utilization of academic social networks (Tennant, 2018).\nP3:\nMaybe worth discussing some of the ideas that revolve around altmetrics and social impact here. And also the distinction between an altmetric number (e.g., like via Altmetric), and the utility of the context that comes with this?\nP4:\nWas the lack of focus on other things than high impact publications here explicit? Are there any more recent studies on this issue than Harley et al.? In eight years there seems to have been a lot of changes on this topic, although perhaps not well studied. I think both DORA and ASAPbio at least have anecdotal data that might be useful context here.\nP7:\nCould you describe what is meant by ‘societal impact’ here? 
And perhaps how traditional publishing, new forms of communication, and altmetrics fit into this concept.\nBeyond RPT guidelines P6:\nHas anyone besides Estabrook and Warner (2003) ever conducted a study into the relationship between RPT guidelines and the actual practices of those involved in the process? Is this a major gap in our understanding here?\nConclusions P2:\nI feel some credit should be given to the Leiden Manifesto here too. Are there any other initiatives that warrant mentioning too?\nP3:\nIs there any evidence as to whether DORA has had a true impact or not? I see the Curry (2018) article is cited here, but perhaps some explicit examples can be given. I know it is beyond the USA/Canada, but perhaps developments with ‘Plan S’ can also be cited here, with their recommendations to follow DORA or an equivalent (also includes the Gates Foundation now as a USA-based funder).\nP4:\nDo you feel the imbalance of this trifecta is perhaps one of the causes behind a general system of inertia towards fairer, or more rigorous, research(er) evaluation processes?\nP5:\nWhat might some of the wider impacts of these two steps be within the present and future system of research?\nCongratulations to the authors on a great piece of work, and I look forward to seeing their research published in a revised form at some point. Please note that virtually all of the comments here are simply questions or comments to improve the argumentation style and narrative of the paper, which in my view is otherwise sound and a valuable contribution to the scholarly record.\nSincerely,\nJonathan Tennant\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
},
{
"id": "42353",
"date": "04 Jan 2019",
"name": "Aaron Barlow",
"expertise": [
"Reviewer Expertise My primary area of research is American Culture and",
"as former Faculty Editor of Academe",
"the magazine of the American Association of University Professors",
"academia itself."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article is sound as it is. In fact, it is more than sound; it is an excellent presentation of an important contemporary issue in higher education. The questions raised below are not of what needs to be addressed in the article but that need to be addressed following it.\nThe questions surrounding Reappointment Promotion Tenure (RPT) in contemporary colleges and universities need to be addressed with care, compassion and comprehensively. This article is a good start toward focusing ongoing discussion in a way that can be useful across the range of Canadian and American institutions of higher education. There are a number of areas I would like to see more exploration of, but I think they are going to require further research and consideration; there is certainly not room for them here though the article does point to them.\nOne of the areas is the relative importance of teaching, scholarship and service in different institutional situations. Where there is great reliance on adjuncts for teaching, for example, service takes on an importance it may not have when there is adequate tenured and tenure-track faculty to cover departmental needs. Where students are reaching college with inadequate preparation, also, an emphasis on teaching may be more important than in a situation of selective admissions. 
These points are recognized in the article, certainly, but a great deal more consideration is warranted.\nAlso touched on but needing a great deal more exploration (though, again, in future articles) is what amounts to the lumping of various scholarship needs and standards in various fields. This is mentioned (the emphasis on books in certain fields, for example) but the history behind the movement toward a focus on peer review and other commonalities between fields is worthy of careful research. My suspicion is that it goes back at least to reaction to C. P. Snow’s The Two Cultures in the late 1950s when the humanities and social sciences began to model themselves after scientific disciplines. What should “scholarship” mean in different fields?\nThe article does present the idea, originating elsewhere, that a fourth category be added to scholarship, teaching and service, one that gives credit for scholarly and popularization work that is not peer reviewed but that benefits the community. This is another area that could lead to fruitful discussion, though the upshot might not be a fourth area but instead a broadening of what is considered scholarship in institutional settings.\nIt could also be fruitful, in terms of re-evaluating RPT, to see new and more specific historical studies presenting snapshots of RPT as it was practiced 25, 50, 75 and 100 years ago. American universities built their reputations over the last century; it would be interesting to see how tenure and promotion processes worked for past generations.\nFinally, I would like to see the essence of this article distilled into another piece that succinctly outlines the questions and possibilities raised in a way that can be used in departments with little interest in the specifics of the excellent research presented here but who need to be reassessing RPT procedures in light of changed teaching, scholarship and service environments.\nObviously, the subject of this article is worthy of a book. 
I hope, therefore, that the authors continue down the paths they explore here. Without national consideration of contemporary concerns relating to RPT in both Canada and the United States, problems such as predatory publishing, among others, will not only continue but will expand.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1605
|
https://f1000research.com/articles/7-514/v1
|
27 Apr 18
|
{
"type": "Review",
"title": "Role of CXCL13 in the formation of the meningeal tertiary lymphoid organ in multiple sclerosis",
"authors": [
"Ana C. Londoño",
"Carlos A. Mora",
"Ana C. Londoño"
],
"abstract": "Immunomodulatory therapies available for the treatment of patients with multiple sclerosis (MS) accomplish control and neutralization of peripheral immune cells involved in the activity of the disease cascade. However, their spectrum of action in the intrathecal space and brain tissue is limited, taking into consideration the persistence of oligoclonal bands and the variation of clones of lymphoid cells throughout the disease span. In animal models of experimental autoimmune encephalomyelitis, a blockage of CXCL13 has resulted in modification of the disease course and it could work as a potential complementary therapeutic strategy in patients with MS in order to postpone disease progression. The development of therapeutic alternatives with ability to reduce the intrathecal inflammatory activity of the meningeal tertiary lymphoid organ to ameliorate neurodegeneration is mandatory.",
"keywords": [
"multiple sclerosis",
"chemokines",
"CXCL13",
"B cells",
"tertiary lymphoid organ",
"meninges"
],
"content": "Introduction\n\nAlthough disease modifying therapy (DMT) agents in multiple sclerosis (MS) have contributed to reduction of neuroinflammation, they have not succeeded in the prevention of progression of disease. Inflammation is the appropriate immune response to infection, autoimmunity, cancer, injury and allograft transplantation1. When inflammation does not resolve appropriately, a prolonged immune response persists leading to tissue destruction and loss of function1. Chronic infiltration by immune cells in the meninges is believed to form transitory lymphoid cell aggregates which simulate secondary lymphoid organs (SLO), and are known as meningeal tertiary lymphoid organs (mTLO) which play an important role in the pathogenesis of autoimmunity1,2. The mTLO seem to play a role in the intrathecal activity of immune system cells in MS3. The SLO, such as lymph nodes, show a cellular organization that includes germinal centers (GC) containing antibody secreting and proliferating B-cells with follicular dendritic cells (FDC), a T-cell zone that incorporates naïve cells from the blood stream, high endothelial venules for extravasation of lymphocytes, and a stromal cell network that provides chemokines and extracellular matrix for cell migration and structural integrity1. Chemokines are a family of proteins with the specific property of regulating leukocytes in the immune system and they may play a role in neurotransmission and neuromodulation4. Leukocyte trafficking is mediated by inflammatory chemokines in inflamed tissues and by homeostatic chemokines in lymphoid sites5 (Figure 1). 
In this review, we focus on the role that CXCL13 (also known as B cell attracting chemokine [BCA-1], C-X-C motif ligand 13, or B lymphocyte chemoattractant [BLC]) plays in the formation of the mTLO in MS.\n\nB-cells originating in the bone marrow exit toward the blood stream as immature B-cells; they enter the SLO and specialize in the germinal centers, producing memory B cells and plasmablasts, which, in pathologic conditions, are able to gain access to the CNS. The TLO are formed in the meninges during chronic inflammation in the deep brain cortical sulci and share organogenesis with the SLO6. Podoplanin and the Th17 signature cytokine IL-17 have been associated with ectopic lymphoneogenesis in human diseases, whereas BAFF, which is produced by astrocytes in the CNS, is a key factor for maturation and survival of B cells1,3. BAFF: B-cell activating factor of the tumor necrosis factor family; BALT: bronchial associated lymphoid tissue; CCL19: chemokine (C-C motif) ligand 19; CSF: cerebrospinal fluid; CXCL13: chemokine (C-X-C motif) ligand 13; FDC: follicular dendritic cells; GC: germinal center; LT: lymphotoxin α1β2/LTβR system; LLPC: long lived plasma cells; MS: multiple sclerosis; PC: plasma cell; SLO: secondary lymphoid organ; TLO: tertiary lymphoid organ.
Generation of B cells with the ability to produce autoantibodies usually occurs in physiological conditions9. These autoantibodies are low affinity IgM, which exhibit a wide spectrum of reactivity and a strong preference for soluble self-antigens on the cell surface9. Autoreactive low affinity B cells undergo apoptosis, making it unlikely that they represent danger in normal conditions9.\n\n\nLymphoid cells are able to learn and exchange information at the GC\n\nThe GC present remarkable lymphocytic mitosis within SLO follicles10. Weyand et al. stated that the GC are critical in the development of the normal B-cell immune response by driving cell division and maturation, B-cell selection with high affinity for immunoglobulin receptors, and differentiation of B-cells and plasma cells (PC)2. Real time imaging technology has allowed visualization of the transit of the B cells from the dark zone to the light zone, and vice versa, during the maturation of the GC10–12. The GC light zone displays a predominance of FDC and follicular T-helper (Tfh) cells, whereas the dark zone contains closely packed lymphocytes and stromal cells10,13. The chemokine receptor CXCR4 is required for the positioning of the B cells in the dark zone, where its ligand, CXCL12, is more abundant and is produced by stromal cells14. At the light zone, the CXCL13 chemokine is concentrated in the FDC processes and, in conjunction with CXCR5, they contribute to the accumulation of B cells in this zone13,14. T-cells in the GC are essential to maintain signaling and represent approximately 5–20% of the cell population10. Tfh cells, a subtype of T-helper cells, are characterized by the expression of CXCR5 and ICOS10,15. Within the light zone, the three possible outcomes for the centrocytes include death due to apoptosis; differentiation into memory B-cells or long lived plasma cells (LLPC); and re-entrance to the dark zone for a further round of cell mutation and selection16. 
The relevant function of the GC is, most likely, the primary production of memory B-cells and LLPC16,17 (Figure 2). Recent studies analyzing IgG heavy chain variable region genes in B cells from MS patients revealed that B cells are able to enter and exit the blood brain barrier in order to get exposed to somatic hypermutation at the GC18–22.\n\nB-cells enter the dark zone of the germinal center (centroblasts), a step which depends on the expression of CXCR4 on the surface, where the cells go through proliferation and somatic hypermutation (SHM). Subsequently, the cells migrate to the light zone (centrocytes), where they capture antigens through the mutated B cell receptors; the antigens are internalized for presentation to the T cells. The centrocytes are distinguished from the centroblasts by their level of expression of surface proteins. Centrocytes are CXCR4low, CD83high, CD86high and the centroblasts are CXCR4high, CD83low and CD86low. The fluctuation between centroblasts and centrocytes is part of a synchronized cellular program which permits a temporal separation of the processes of mitosis and SHM from selection. The functional output of the TLO, in comparison to the SLO, could result from the dysregulated nature of their GC response, supporting a breakout of autoimmune variants and the development of long lasting humoral autoimmunity characterized by the presence of B cells with minimal memory and LLPC16,17. FDC: follicular dendritic cells; LLPC: long lived plasma cells; Tfh: T follicular helper cells.\n\n\nChemokines direct traffic of lymphocytes during the cell search for specific information\n\nThe induction of lymphoid chemokines depends on lymphotoxin β (LT-β) and tumor necrosis factor α (TNF-α) signaling on stromal cells and FDC23. 
Lymphotoxin α1β2 (LTα1β2) is expressed on the surface of B and T cells in the adult immune system and binds to the lymphotoxin β receptor (LTβR) on reticular stromal cells, thus inducing expression of lymphoid chemokines such as CCL19, CCL21 and CXCL1324. These chemokines regulate the homeostatic traffic of lymphocytes in lymphoid organs and their distribution in the GC24. Homeostatic chemokines promote secretion of LTα1β2 by T and B cells, establishing a feedback loop that perpetuates the recruitment of lymphocytes and their positional organization in the GC1. The chemokine CXCL13 has the following relevant properties:\n\n1. CXCL13 increases its own production by stimulating the growth of FDC after regulating LTα1β2 on the membrane of B cells5,25.\n\n2. CXCL13 is produced in the SLO by FDC and macrophages and is an important chemoattractant to the CNS26,27.\n\n3. Follicular stromal cells express CXCL13, which is needed for nesting CXCR5+ B cells and a subset of T cells in the follicular compartment7.\n\n4. CXCL13 primarily works through CXCR5, which is expressed in mature B lymphocytes9, CD4+ Tfh28, CD4+ Th17 cells29, a minor subset of CD8+ T cells and activated tonsil Treg cells9,29.\n\n5. CXCL13 has no effect on CD138+ and CD38+ plasmablasts, or on PC18.\n\nStromal cells from the T cell zone express the chemokines CCL19 and CCL21, which share the receptor CCR7 that directs naïve and central memory T cells and DC to the T cell compartment7,30. CXCR5 is expressed in 20 to 30% of CD4+ T cells in blood and CSF, in virtually all B cells in blood, and in the majority of B cells in the CSF compartment31. Mice lacking CXCL13, or its receptor CXCR5, fail to develop peripheral lymph nodes1. Khademi et al. determined the concentration of CXCL13 in the CSF of individuals with MS, other neurological diseases including viral and bacterial infection, and healthy controls, finding the highest levels of the chemokine in subjects with infections, followed to a lesser extent by the patients with MS32. 
The levels of CXCL13 correlated negatively with disease duration, leading to the conclusion that early determination of CXCL13 might predict the prognosis of disease32.\n\n\nThe TLO become operation centers, different from the SLO, with the ability to magnify an autoimmune response\n\nBy maintaining antibody diversity, B cell differentiation, isotype switching, oligoclonal expansion, and local production of autoreactive PCs, the TLO perpetuate disease in response to environmental inputs33. The processes of biological development involved in lymphoid organogenesis are shared among the secondary and tertiary lymphoid structures2. Lymphoid organogenesis and formation of mTLO may be facilitated by expression of lymphotoxin α (LT-α) at the external layer of inflamed meningeal vessels, leading to the compartmentalization of the immune response in MS18,34. The mTLO maintain differentiation and maturation of antigen specific effector lymphocytes, which perpetuates inflammation and disease progression27. The TLO, besides the SLO, provide a thriving environment where PC differentiate from plasmablasts7,27. In the absence of recirculating immune cells from the periphery, the TLO exert a remarkable ability to remain active for several weeks35. Therefore, the neutralization of TLO could play a significant role by blocking the re-emergence of autoreactive clones that could drive relapses or resistance to therapy35. Th17 cells, Tfh and a subtype of activated B cells, which are critical in systemic inflammation related to the presence of TLO, are strongly associated with MS progression36.\n\n\nIn the absence of CXCL13, a reduced inflammatory response emerges from studies on animal models and human pathology\n\nDisorganized B cell follicles in the SLO have shown reduced capacity to originate natural antibody responses in CXCL13-/- mice25,37. 
Deficiency of CXCL13 results in a moderate course of disease characterized by a better recovery, with attenuation of white matter inflammation and gliosis during the acute and chronic stages of EAE38. Krumbholz et al. showed there was a direct correlation between CXCL13 levels and the number of B cells, T cells and plasmablasts in the CSF of MS patients5. Clonal expansion and somatic hypermutation of B cells have been observed in the CSF of patients with MS39. CXCL13 was upregulated in active MS lesions but not in chronic inactive lesions, while serum levels were in a similar range in patients with relapsing remitting MS (RRMS) and control subjects, indicating intrathecal production of this chemokine5. CXCL13 was identified by immunohistochemistry in intrameningeal B-cell follicles, but not in the cerebral parenchyma, of chronic active or inactive MS lesions40. Patients with clinically isolated syndrome who had shown conversion to clinically definite MS within 2 years had high levels of CXCL13 in the CSF32,41,42. Elevated levels of CXCL13 in CSF have also been reported in patients with RRMS compared to controls, and the CSF levels were significantly increased during relapses but declined after initiation of B cell depleting therapy23,32,43.\n\n\nA forthcoming research task: How early are the mTLO formed in the disease lifespan?\n\nMeningeal infiltrates can be dispersed or well organized, encompassing mTLO, whose lifespan is unknown27,40. The presence of follicles containing proliferating B cells, T cells, PC and FDC that express CXCL13 in the proximity of inflamed blood vessels in the meninges of patients with secondary progressive MS (SPMS) has been documented40. The mTLO correlated with neuronal loss, adjacent cortical demyelination and a more rapid progression of disease23. 
Patients with SPMS with positive mTLO have shown wide gray matter demyelination associated with loss of neurons, oligodendrocytes, and astrocytes; cortical atrophy; and microglial activation in the outer layer of the cortex44,45. It remains to be determined whether the formation of mTLO depends on the subtype of disease, or whether it is the result of inflammation or a consequence of chronicity35.\n\n\nCould CXCL13 be neutralized by direct action on itself, its receptor (CXCR5) or the lymphotoxin β (LT-β)?\n\nA novel therapeutic monoclonal antibody against CXCL13 (Mab 5261 and Mab 5261-muIg) has been shown to induce functional in vitro inhibition of the chemokine in humans and mice9. LT-β receptor blocking immunoglobulin inhibits CXCL13 interactions, suppresses the formation of mTLO in the CNS and ameliorates the symptoms of EAE in rodents24. In the EAE induced by the transfer of myelin-specific Th17 cells (Th17 EAE), Quinn et al. confirmed a role of Tfh cells by blocking Tfh trafficking using an antibody against CXCL13 and found that this treatment significantly reduced expression of disease46. Some DMT available for the treatment of MS reduce levels of CXCL13, but the mechanisms by which this occurs are not completely understood. In patients with RRMS treated with natalizumab, a significant reduction in CXCL13 in CSF was observed in comparison to β-interferon47. In another study, Novakova et al. evaluated the effect of treatment with fingolimod on CSF biomarkers, including CXCL13, of MS patients who had previously been on β-interferon, glatiramer acetate or teriflunomide (and had to switch therapy because of breakthrough disease activity) or natalizumab (and had to switch due to risk of PML), observing a significant reduction of CXCL13 in the CSF of patients in both groups48. Also, Alvarez et al. 
found that in patients with active RRMS, in spite of treatment with β-interferon or glatiramer acetate, the administration of rituximab led to a normalization of the CSF level of CXCL13 in the majority of patients, thus suggesting that high levels of CXCL13 in CSF at baseline could predict a forthcoming therapeutic response to B cell depletion49. Piccio et al. found that in patients with RRMS treated with IV rituximab, concomitant with either β-interferon or glatiramer acetate, there was a reduction of CXCL13 and CCL19 in CSF, which correlated with a significant reduction of B cells (95%) and T cells (50%) in CSF31. Perry et al. found an intrathecal reduction of CXCL13 (50.4%) and of the IgG index (13.5%) resulting from inhibition of the development of lymphoid tissue inducer cells in patients with MS treated with daclizumab50. Braendstrup et al. reported the case of a patient with MS who had undergone allogeneic hematopoietic stem cell transplant for treatment of follicular lymphoma and who, after two years, presented negative oligoclonal bands and detectable CXCL13 in CSF51.\n\n\nIs a complementary intrathecal therapy for deactivation of the mTLO necessary to arrest disease progression?\n\nA self-sustained intrathecal inflammation fostered by CSF chemokines involved in the traffic and survival of inflammatory cells occurs early in disease and is orchestrated by the mTLO3. Studies have shown that lineages of B cells can travel through peripheral blood, cervical lymphoid nodes, and the intrathecal compartment, where they can be exposed to somatic hypermutation in the mTLO, and return to peripheral blood18. As mentioned above, Piccio et al. found that CSF CXCL13 and CCL19 were decreased at week 24 after IV rituximab31. However, Topping et al. found that therapy with intrathecal rituximab in patients with RRMS and SPMS resulted in no variation of CXCL13 levels in serum and CSF during the period of evaluation52. 
Bonnan has hypothesized that, in order to prevent an unwanted generalized immune suppression resulting from systemic targeting of resident TLO, intrathecal immune reset should be attempted with a combination of monoclonal antibodies targeting each cell sub-type and aimed at simultaneously eliminating B cells, T cells, PC and FDC via the intrathecal route. With the exception of rituximab, candidate drugs still require preclinical studies for validation3.\n\n\nConclusion\n\nAn early neutralization of CXCL13 would interfere with the organization and function of the mTLO, thus modifying and reducing inflammation in the CNS of patients with MS. Studies in animal models where CXCL13 has been neutralized, or is not expressed (such as the CXCL13-/- mice), confirm its crucial role in maintaining, rather than initiating, inflammation, and suggest that its manipulation could lead to modification of disease in these models37. However, any therapeutic strategy unable to neutralize LLPCs or antibody secreting cells will not be successful in an attempt to impede the chronic progression of disease53. Neutralization of CXCL13 should be sought as a complementary therapy to the DMT in MS.\n\n\nData availability\n\nNo data is associated with this article.",
"appendix": "Competing interests\n\n\n\nCAM is a member of the Data & Safety Monitoring Board for the NINDS/NIH study NS003055-08/NS003056-08. He has received no compensation for his participation in that study. ACL does not report any competing interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThis work was presented, in preliminary version, at the Third annual Americas Committee for Treatment and Research in Multiple Sclerosis (ACTRIMS Forum 2018) in San Diego, CA, on February 2, 2018. Poster presentation No. P193. Financial support for the on-line publication of this article was provided by The Department of Neurology, MedStar Georgetown University Hospital.\n\n\nReferences\n\nJones GW, Jones SA: Ectopic lymphoid follicles: inducible centres for generating antigen-specific immune responses within tissues. Immunology. 2016; 147(2): 141–151. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeyand CM, Kurtin PJ, Goronzy JJ: Ectopic lymphoid organogenesis: a fast track for autoimmunity. Am J Pathol. 2001; 159(3): 787–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBonnan M: Intrathecal immune reset in multiple sclerosis: exploring a new concept. Med Hypotheses. 2014; 82(3): 300–309. PubMed Abstract | Publisher Full Text\n\nMélik-Parsadaniantz S, Rostène W: Chemokines and neuromodulation. J Neuroimmunol. 2008; 198(1–2): 62–68. PubMed Abstract | Publisher Full Text\n\nKrumbholz M, Theil D, Cepok S, et al.: Chemokines in multiple sclerosis: CXCL12 and CXCL13 up-regulation is differentially linked to CNS immune cell recruitment. Brain. 2006; 129(Pt 1): 200–211. PubMed Abstract | Publisher Full Text\n\nMeinl E, Krumbholz M, Hohlfeld R: B lineage cells in the inflammatory central nervous system environment: migration, maintenance, local antibody production, and therapeutic modulation. Ann Neurol. 2006; 59(6): 880–892. 
PubMed Abstract | Publisher Full Text\n\nDrayton DL, Liao S, Mounzer RH, et al.: Lymphoid organ development: from ontogeny to neogenesis. Nat Immunol. 2006; 7(4): 344–53. PubMed Abstract | Publisher Full Text\n\nRuddle NH: Lymphoid neo-organogenesis: lymphotoxin’s role in inflammation and development. Immunol Res. 1999; 19(2–3): 119–125. PubMed Abstract | Publisher Full Text\n\nKlitmatcheva E, Pandina T, Reilly C, et al.: CXCL13 antibody for the treatment of autoimmune disorders. BMC Immunol. 2015; 16(1): 6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAllen CD, Okada T, Cyster JG: Germinal-center organization and cellular dynamics. Immunity. 2007; 27(2): 190–202. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHauser AE, Junt T, Mempel TR, et al.: Definition of germinal-center B cell migration in vivo reveals predominant intrazonal circulation patterns. Immunity. 2007; 26(5): 655–667. PubMed Abstract | Publisher Full Text\n\nSchwickert TA, Lindquist RL, Shakhar G, et al.: In vivo imaging of germinal centres reveals a dynamic open structure. Nature. 2007; 446(7131): 83–87. PubMed Abstract | Publisher Full Text\n\nCyster JG, Ansel KM, Reif K, et al.: Follicular stromal cells and lymphocyte homing to follicles. Immunol Rev. 2000; 176(1): 181–193. PubMed Abstract | Publisher Full Text\n\nAllen CD, Ansel KM, Low C, et al.: Germinal center dark and light zone organization is mediated by CXCR4 and CXCR5. Nat Immunol. 2004; 5(9): 943–952. PubMed Abstract | Publisher Full Text\n\nVinuesa CG, Tangye SG, Moser B, et al.: Follicular B helper T cells in antibody responses and autoimmunity. Nat Rev Immunol. 2005; 5(11): 853–865. PubMed Abstract | Publisher Full Text\n\nAlsughayyir J, Pettigrew GJ, Motallebzadeh R: Spoiling for a Fight: B Lymphocytes as Initiator and Effector Populations within Tertiary Lymphoid Organs in Autoimmunity and Transplantation. Front Immunol. 2017; 8: 1639. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBannard O, Horton RM, Allen CD, et al.: Germinal center centroblasts transition to a centrocyte phenotype according to a timed program and depend on the dark zone for effective selection. Immunity. 2013; 39(5): 912–924. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlauth K, Owens GP, Bennett JL: The Ins and Outs of B Cells in Multiple Sclerosis. Front Immunol. 2015; 6: 565. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBittner S, Ruck T, Wiendl H, et al.: Targeting B cells in relapsing-remitting multiple sclerosis: from pathophysiology to optimal clinical management. Ther Adv Neurol disord. 2017; 10(1): 51–66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVon Büdingen HC, Kuo TC, Sirota M, et al.: B cell exchange across the blood-brain-barrier in multiple sclerosis. J Clin Invest. 2012; 122(12): 4533–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPalanichamy A, Apeltsin L, Kuo TC, et al.: Immunoglobulin class-switched B cells form an active immune axis between CNS and periphery in multiple sclerosis. Sci Transl Med. 2014; 6(248): 248ra106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStern JN, Yaari G, Vander Heiden JA, et al.: B cells populating the multiple sclerosis brain mature in the draining cervical lymph nodes. Sci Transl Med. 2014; 6(248): 248ra107. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIrani DN: Regulated Production of CXCL13 within the Central Nervous System. J Clin Cell Immunol. 2016; 7(5): pii: 460. PubMed Abstract | Publisher Full Text | Free Full Text\n\nColumba-Cabezas S, Griguoli M, Rosicarelli B, et al.: Suppression of established experimental autoimmune encephalomyelitis and formation of meningeal lymphoid follicles by lymphotoxin beta receptor-Ig fusion protein. J Neuroimmunol. 2006; 179(1-2): 76–86. 
PubMed Abstract | Publisher Full Text\n\nAnsel KM, Ngo VN, Hyman PL, et al.: A chemokine-driven positive feedback loop organizes lymphoid follicles. Nature. 2000; 406(6793): 309–314. PubMed Abstract | Publisher Full Text\n\nCarlsen HS, Baekkevold ES, Morton HC, et al.: Monocyte-like and mature macrophages produce CXCL13 (B cell-attracting chemokine 1) in inflammatory lesions with lymphoid neogenesis. Blood. 2004; 104(10): 3021–7. PubMed Abstract | Publisher Full Text\n\nMitsdoerffer M, Peters A: Tertiary Lymphoid Organs in Central Nervous System Autoimmunity. Front Immunol. 2016; 7: 451. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFazilleau N, Mark L, McHeyzer-Williams LJ, et al.: Follicular helper T cells: lineage and location. Immunity. 2009; 30(3): 324–335. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim HW, Hillsamer P, Kim CH: Regulatory T cells can migrate to follicles upon T cell activation and suppress GC-Th cells and GC-Th cell-driven B cell responses. J Clin Invest. 2004; 114(11): 1640–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLuther SA, Tang HL, Hyman PL, et al.: Coexpression of the chemokines ELC and SLC by T zone stromal cells and deletion of the ELC gene in the plt/plt mouse. Proc Natl Acad Sci U S A. 2000; 97(23): 12694–12699. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPiccio L, Naismith RT, Trinkaus K, et al.: Changes in B- and T-lymphocyte and chemokine levels with rituximab treatment in multiple sclerosis. Arch Neurol. 2010; 67(6): 707–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhademi M, Kockum I, Andersson ML, et al.: Cerebrospinal fluid CXCL13 in multiple sclerosis: a suggestive prognostic marker for the disease course. Mult Scler. 2011; 17(3): 335–343. PubMed Abstract | Publisher Full Text\n\nCorsiero E, Nerviani A, Bombardieri M, et al.: Ectopic Lymphoid Structures: Powerhouse of Autoimmunity. Front Immunol. 2016; 7: 430. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorcione A, Casazza S, Ferretti E, et al.: Recapitulation of B cell differentiation in the central nervous system of patients with multiple sclerosis. Proc Natl Acad Sci U S A. 2004; 101(30): 11064–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPitzalis C, Jones GW, Bombardieri M, et al.: Ectopic lymphoid-like structures in infection, cancer and autoimmunity. Nat Rev Immunol. 2014; 14(7): 447–462. PubMed Abstract | Publisher Full Text\n\nRomme Christensen J, Börnsen L, Ratzer R, et al.: Systemic inflammation in progressive multiple sclerosis involves follicular T-helper, Th17- and activated B-cells and correlates with progression. PLoS One. 2013; 8(3): e57820. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRainey-Barger EK, Rumble JM, Lalor SJ, et al.: The lymphoid chemokine, CXCL13, is dispensable for the initial recruitment of B cells to the acutely inflamed central nervous system. Brain Behav Immun. 2011; 25(5): 922–931. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBagaeva LV, Rao P, Powers JM, et al.: CXC chemokine ligand 13 plays a role in experimental autoimmune encephalomyelitis. J Immunol. 2006; 176(12): 7676–7685. PubMed Abstract | Publisher Full Text\n\nColombo M, Dono M, Gazzola P, et al.: Accumulation of clonally related B lymphocytes in the cerebrospinal fluid of multiple sclerosis patients. J Immunol. 2000; 164(5): 2782–9. PubMed Abstract | Publisher Full Text\n\nSerafini B, Rosicarelli B, Magliozzi R, et al.: Detection of ectopic B-cell follicles with germinal centers in the meninges of patients with secondary progressive multiple sclerosis. Brain Pathol. 2004; 14(2): 164–74. PubMed Abstract | Publisher Full Text\n\nFerraro D, Galli V, Vitetta F, et al.: Cerebrospinal fluid CXCL13 in clinically isolated syndrome patients: Association with oligoclonal IgM bands and prediction of Multiple Sclerosis diagnosis. J Neuroimmunol. 2015; 283: 64–69. 
PubMed Abstract | Publisher Full Text\n\nBrettschneider J, Czerwoniak A, Senel M, et al.: The chemokine CXCL13 is a prognostic marker in clinically isolated syndrome (CIS). PLoS One. 2010; 5(8): e11986. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSellebjerg F, Börnsen L, Khademi M, et al.: Increased cerebrospinal fluid concentrations of the chemokine CXCL13 in active MS. Neurology. 2009; 73(23): 2003–2010. PubMed Abstract | Publisher Full Text\n\nMagliozzi R, Howell OW, Reeves C, et al.: A Gradient of neuronal loss and meningeal inflammation in multiple sclerosis. Ann Neurol. 2010; 68(4): 477–493. PubMed Abstract | Publisher Full Text\n\nHowell OW, Reeves CA, Nicholas R, et al.: Meningeal inflammation is widespread and linked to cortical pathology in multiple sclerosis. Brain. 2011; 134(Pt 9): 2755–2771. PubMed Abstract | Publisher Full Text\n\nQuinn JL, Kumar G, Agasing A, et al.: Role of TFH cells in Promoting T Helper 17-Induced Neuroinflammation. Front Immunol. 2018; 9: 382. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNovakova L, Axelsson M, Khademi M, et al.: Cerebrospinal fluid biomarkers as a measure of disease activity and treatment efficacy in relapsing-remitting multiple sclerosis. J Neurochem. 2017; 141(2): 296–304. PubMed Abstract | Publisher Full Text\n\nNovakova L, Axelsson M, Khademi M, et al.: Cerebrospinal fluid biomarkers of inflammation and degeneration as measures of fingolimod efficacy in multiple sclerosis. Mult Scler. 2017; 23(1): 62–71. PubMed Abstract | Publisher Full Text\n\nAlvarez E, Piccio L, Mikesell RJ, et al.: Predicting optimal response to B-cell depletion with rituximab in multiple sclerosis using CXCL13 index, magnetic resonance imaging and clinical measures. Mult Scler J Exp Transl clin. 2015; 1: 2055217315623800. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerry JS, Han S, Xu Q, et al.: Inhibition of LTi cell development by CD25 blockade is associated with decreased intrathecal inflammation in multiple sclerosis. Sci Transl Med. 2012; 4(145): 145ra106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBraendstrup P, Langkilde AR, Schreiber K, et al.: Progression and CSF Inflammation after Eradication of Oligoclonal Bands in an MS Patient Treated with Allogeneic Hematopoietic Cell Transplantation for Follicular Lymphoma. Case Rep Neurol. 2012; 4(2): 101–106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTopping J, Dobson R, Lapin S, et al.: The effects of intrathecal rituximab on biomarkers in multiple sclerosis. Mult Scler Relat Disord. 2016; 6: 49–53. PubMed Abstract | Publisher Full Text\n\nEggers EL, Michel BA, Wu H, et al.: Clonal relationships of CSF B cells in treatment-naive multiple sclerosis patients. JCI Insight. 2017; 2(22): pii: 92724. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "33520",
"date": "30 Apr 2018",
"name": "Hans Lassmann",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the review article the authors highlight the potential importance of tertiary lymph follicle like structures in the meninges of MS patients as driving forces for tissue damage and in particular cortical demyelination. They provide a very good summary of immunological mechanisms, which are involved in the organization and function of secondary lymph follicles in the peripheral lymphatic tissue and in particular highlight the importance of the interaction of CXCL13 with other cytokines and chemokines in these processes. They then describe in detail the evidence for the presence of structures with features of tertiary lymph follicles in the meninges of MS patients and their association with disease severity and cortical pathology. Finally they also review in detail the observations that CXCL13 is present in the CSF and may serve as a biomarker associated with poor prognosis of the disease. Based on the experimental observation that CXCL13 blockade or genetic ablation ameliorates EAE the authors propose that therapeutic blockade of CXCL13 in the CNS compartment of MS patients may be beneficial.\n\nThere is now good cumulative evidence that such follicle like inflammatory aggregates in the meninges are an important substrate of disease pathology in MS and that B-lymphocytes play an important, but so far not fully understood pathogenetic role in the disease. 
It is also clear that CXCL13 is an important chemokine, involved in B-cell recruitment into the central nervous system. However, it may be premature at the present time to propose intrathecal CXCL13 blockade as a therapy for MS patients. The EAE studies are only of limited value. It is not a surprise to ameliorate EAE with a therapy, which has major effects on the organization and function of peripheral lymphatic tissue. Although some EAE models show lymphocytic aggregates in the meninges, which share some features with those in MS, this is not the case in the majority of the models. Furthermore, in the respective mouse EAE models with lymph follicle like aggregates in the meninges there is no cortical demyelination. Thus lesion pathogenesis apparently is quite different between these models and MS. To what extent an intrathecal blockade of CXCL13 has an effect on CNS inflammation and what kind of effect will be achieved, is currently unknown. Whether this may induce dangerous side effects is also unclear. The suggestion to combine such a treatment with simultaneous intrathecal elimination of B-cells, T-cells and other immune cells is also far away from realization. Elimination with currently used antibodies requires complement or antibody dependent cellular cytotoxicity, and whether this is safe to induce within the CNS compartment in patients is also rather uncertain. Thus this review addresses a topic, which is interesting in a disease such as multiple sclerosis, but the suggestions for therapeutic translation are currently premature and potentially dangerous.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Partly",
"responses": [
{
"c_id": "3688",
"date": "30 May 2018",
"name": "Carlos Mora",
"role": "Author Response",
"response": "We thank Dr. Lassmann (Referee 1) for the valuable comments addressed upon the review of version No. 1 of our article. We agree on the inconclusive current state of knowledge on the possible effect of intrathecal blockade of CXCL13 during CNS inflammation. On the concept of efficacy of monoclonal antibodies as modulators of the immune response in the cerebrospinal fluid (CSF), special mention deserves the work reported by Komori et al. on the insufficient inhibition of activity of disease, upon the administration of intrathecal rituximab, in chronic progressive multiple sclerosis (MS). These investigators found out that the efficacy of a monoclonal antibody in CSF will not be substantial as long as the blood brain barrier remains closed. Following rituximab therapy, depletion of B-cells in the CSF was facilitated by complement dependent cytotoxicity (CDC) and, to a lesser degree, by antibody dependent cellular cytotoxicity. Although a decrement in the concentration of complement would reduce the efficacy of CDC, the addition of complement in the CSF could lead to adverse effects in the CNS tissue [reference: Komori M, Lin YC, Cortese I, Blake A, Ohayon J, Cherup J, et al. Insufficient disease inhibition by intrathecal rituximab in progressive multiple sclerosis. Ann Clin Transl Neurol 2016;3(3):166-179 doi: 10.1002/acn3.293]. In relation to the concept of ‘therapeutic translation’ mentioned in the review, specifically on the effect of an eventual combination of intrathecal blockade of CXCL13 with simultaneous intrathecal elimination of B-cells, T-cells and other immune cells, we do agree this therapeutic approach would be premature and could be potentially harmful for the recipients of such combined therapies clarifying that the hypothesis formulated by Bonnan [ref. 3 in the article] does not make any reference to the intrathecal blockade of CXCL13 in MS. We look forward to hearing further comments from reviewers prior to publication of version No. 
2 of the article."
},
{
"c_id": "3706",
"date": "06 Jun 2018",
"name": "Hans Lassmann",
"role": "Reviewer Response F1000Research Advisory Board Member",
"response": "I aree with the comment of the authors."
}
]
},
{
"id": "35347",
"date": "16 Jul 2018",
"name": "Anneli Peters",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this review article the authors describe the role of the chemokine CXCL13 in the formation of meningeal TLOs in MS and suggest it as a therapeutic target. The article is well written and the first half of the article provides a very detailed overview of the components and requirements for formation of secondary lymphoid organs. The authors then switch to tertiary lymphoid organs in the CNS assuming that all components and mechanisms of formation are identical to SLOs. While this may be the case for some of the most developed TLOs in some autoimmune diseases like Myasthenia gravis, it is not so clear which cell types and molecular players are required for formation of meningeal TLOs. In fact, to my knowledge it has not even been formerly proven that CXCL13 is required for formation of meningeal TLOs. Even though it is quite likely considering detection of CXCL13 in mTLOs and elevated CXCL13 levels in the CSF of MS patients, definitive proof even in the animal model is missing as also pointed out by reviewer 1, because a) CXCL13-deficient mice already have a defect in mounting proper immune responses in SLOs and b) active EAE induced by MOG-peptide/CFA immunization does not prominently feature mTLOs. 
The mouse models that do feature mTLOs such as the spontaneous 2D2xTh mouse have not been studied in the context of CXCL13 deficiency.\nFurthermore, it would be very useful to discuss in this review cellular sources of CXCL13 in the CNS, as they may not be identical to SLOs. Thus, microglia (Ref 37) and meningeal stromal cells (Pikor et al., Immunity, 20151) have been suggested as sources for CXCL13 and should be discussed.\nAnother important point is that the authors state that the \"mTLO maintain differentiation and maturation of antigen-specific lymphocytes which perpetuate inflammation and disease progression\". This is not a fact but a hypothesis and should be stated as such. While it is clearly an attractive hypothesis there is no proof in either mouse models or MS. We agree with the authors that in MS occurrence of TLOs has been associated with more severe disease course and cortical lesions, however, causality has not been demonstrated and even evidence for maturation of antigen-specific lymphocytes in mTLOs is very limited so far. Therefore, we believe that it is not justifiable to interfere with mTLO formation in MS patients, as long as their biological function and consequences are not much better understood.\nAs a side note, some sentences are a bit unclear, for example in the introduction \"Inflammation is the appropriate immune response to...autoimmunity...\" (pg 2) and \"Tfh cells are characterized by expression of CXCR5 and ICOS, which is a subtype of Tfh cells\" (pg 3).\nOverall, the review has a very interesting and important topic, however, in my opinion as detailed above in several paragraphs the wording should be a bit more careful and precise in order not to be misleading.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? 
Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Partly",
"responses": [
{
"c_id": "3960",
"date": "13 Sep 2018",
"name": "Carlos Mora",
"role": "Author Response",
"response": "Author response to Reviewer 2.We also thank Dr. Peters for her important observations to the content of the first version of the manuscript. We concur with the fact that the current knowledge on the formation of the secondary lymphoid organs (SLO) should not be unquestionably extrapolated and applied to the understanding of the genesis of the tertiary lymph nodes (TLO) especially in the context of neuroinflammation. We also understand that some of the existing concepts on this topic should still remain at a hypothetical, instead of conclusive, level of consideration. Yet, thanks to this reviewer comments and encouragement, especially in relation to discussion of possible cellular sources of CXCL13 in the CNS and to the search for further literature supporting a role for CXCL13 in the EAE animal models, we were able to expand in depth the content of the manuscript and bring more interesting material to the discussion giving further support to the role of this chemokine in the pathogenesis of MS. Certainly, patient safety is a demanding priority and any application of the knowledge acquired from in vitro or EAE animal model studies to the treatment of patients with MS should be considered with extreme caution."
}
]
}
] | 1
|
https://f1000research.com/articles/7-514
|
https://f1000research.com/articles/7-1599/v1
|
03 Oct 18
|
{
"type": "Research Article",
"title": "Raising the status of software in research: A survey-based evaluation of the Software Sustainability Institute Fellowship Programme",
"authors": [
"Shoaib Sufi",
"Caroline Jay",
"Caroline Jay"
],
"abstract": "Background: This paper reports the results of an evaluation of the Software Sustainability Institute’s Fellowship Programme, which focused on identifying and categorising the benefits that the fellowship has afforded its recipients, via a series of open questions. Methods: The evaluation took the form of a survey open to people awarded Fellowships between 2012 and 2016, which asked people to report the effect that the programme had had on them, their institutions, their research domains and their careers. Results: The results show that the Fellowship plays a wide-ranging role in supporting communities of best practice and skills transfer, and that a significant benefit is the way it has raised the profile of software in research, and those people who develop and advocate for it. Conclusions: The evaluation of the programme has shown the need to support research software in situ and credit the engineers and researchers who are working in this important area that supports reproducibility, reuse and the integrity of research investments.",
"keywords": [
"research software",
"fellowship"
],
"content": "Introduction\n\nThe Fellowship programme1 run by the UK Software Sustainability Institute (SSI) is a unique package of financial support, networking and advice, which is competitively awarded to members of the research software community. The main goals of the programme are to encourage Fellows to develop their interests in the area of software sustainability and help them to become ambassadors of good software practice in their domains. The programme offers £3000 to support event attendance, workshops, training and other activities to help build awareness, capability and capacity in computational techniques, reproducible research and open science in diverse research domains.\n\nFellows are selected via an open competition, where candidates are judged by a panel of experts (former Fellows and Institute staff members) in terms of their track record in practising and promoting software sustainability, and the activities they plan to run with the Fellowship award. To promote diversity, funding is allocated to people at different career stages (from PhD student to research leader) and a variety of domains (e.g. Glaciology, Research Software Engineering, Humanities and Astrophysics). The overarching aim of the Fellowship programme is to provide support and recognition to those people promoting sustainable software practices, and advocating for and producing more verifiable, shareable and useful research outputs.\n\nThis paper reports the results of a recent survey evaluation of the programme’s effects on its recipients and their wider communities. A thematic analysis of the results shows that the award of a Fellowship had substantial and wide-ranging benefits both for the Fellows themselves, and for their institutions and research domains. The theme that emerged most strongly and consistently was that the Fellowship provided status to both the Fellows themselves and the role of software within research. 
Respondents reported that current academic culture does not always afford recognition to research software and research software engineers, and that the Fellowship has played a key role in improving the visibility of this ubiquitous yet undervalued component of research methodology.\n\nAn earlier version of this article is available on PeerJ as a preprint https://doi.org/10.7287/peerj.preprints.26849v1.\n\nA number of other fellowship providers have published evaluations of their programmes, including the Humboldt Foundation2, the Erwin Schrodinger Fellowships3 and the Union for International Cancer Control (UICC)4. These evaluations used a combination of surveys, data held about the fellows (e.g. demographics, subject areas), and in the case of UICC, case studies. The reports are openly available, but do not constitute peer-reviewed research. Here, we take a different approach, treating the evaluation as a research project (for which ethical approval was obtained), asking primarily open questions, and only including data that were obtained via the study. By conducting the work in this way, we aim to contribute empirically to the software sustainability literature, as well as gaining a local understanding of the Fellowship programme’s impact.\n\n\nMethods\n\nThe survey was conducted using the University of Manchester SelectSurvey.NET instance to ensure the data was collected and stored securely. Participants were contacted via email using the all-fellows mailing list; all current and previous Fellows who are still in contact with the Institute are subscribed to this list. The survey was conducted from the 12th July 2017 to 31st August 2017. After the initial email there were two reminder emails and we chased two individual Fellows who had only made partial survey entries to see if they would offer complete entries (which they subsequently did).\n\nThe initial part of the survey explained what the purpose of this research was and asked for consent from participants. 
Participants were asked to confirm that they agreed to participate, that they understood that participation was voluntary, that they understood their data would remain confidential, and that they permitted anonymous quotes to be published. They were able to say ‘Yes’ or ‘No’ to any of these questions. All participants included in the analysis answered ‘Yes’ to all of these questions. There was a further question around retention, “I agree to my data being retained indefinitely for further research related to the Fellowship Programme.” All participants bar one answered ‘Yes’ to this question.\n\nThe survey then asked Fellows to comment on the benefits of the programme in a number of categories, and to report any negative consequences and suggested improvements (see Table 1). The survey was sent to the entire population of the 2012–2016 Fellows (78 in total). The study received approval from the Computer Science School Panel (ref: 2017-2308-3295) on the delegated authority of the University Research Ethics Committee (UREC), University of Manchester.\n\nFellows were asked to provide information about gender, year in which their Fellowship was awarded, which funding bodies supported their work and their research area. The survey also asked about their current job role, job role at the time the Fellowship was awarded, and specific research area, but this information is not reported here as the small number of participants means it may be possible to identify individuals from this data.\n\nThe free text answers were thematically analyzed in an open coding fashion following established analysis methods5: 1) familiarization with data, 2) generating the initial codes, 3) searching for themes, and 4) iteratively reviewing themes. The generated codebook was agreed between the authors.\n\n\nResults\n\nThere was a response rate of 33% (N = 26). Seven fellows from 2016 responded, 8 from 2015, 6 from 2014, 4 from 2013 and 1 from 2012. 
One of the respondents (Caroline Jay) is an author of this paper, and her results have thus been excluded from the analysis, leaving a total of 25 respondents.\n\nFive respondents were female and 21 were male. Table 2 shows the funding bodies that supported the respondents’ research.\n\nThe centre column shows the number of respondents listing the body as their primary funder. The right hand column shows the number of respondents listing the body as an additional funder.\n\nIn answer to the question, ‘Do you think being awarded a Software Sustainability Institute Fellowship has benefitted you?’ 96% (n = 24) answered ‘yes’. One person answered ‘unsure’ and zero people answered ‘no.’\n\nIn answer to the question, ‘Do you think being a Fellow has helped to advance your career?’ 72% (n = 18) answered ‘yes,’ 16% (n = 4) answered ‘no’, and 12% (n = 3) answered ‘unsure.’\n\nThe first author coded the dataset into a number of initial themes. These were grouped into overarching themes by the second author, which were then used as a codebook for the answers to the questions ‘How has the Fellowship benefitted you/your institution(s)/your domain/others?’. The results were checked by the first author for agreement. The emergent themes are described in the bulleted list below.\n\n• Status: giving status and recognition to individuals and organisations for their role in sustaining software, and to sustainable software practices themselves.\n\n• Community/network: organizing/attending events; building professional and personal networks.\n\n• Professional development: improving one’s own skills through undertaking training and improving the skills of others by providing training.\n\n• Resources: obtaining resources for travel and other professional activities.\n\nTable 3 shows the number of respondents who reported a benefit under each theme for the categories that the questions asked about: self, institution, domain and others. 
In the following sections we explore each of these themes in turn.\n\nAcross the questions, 31 comments were made in relation to the Fellowship leading to an improvement in “profile and prestige” (R5). The majority of these (twenty responses) were in relation to improving the status of the individual Fellow.\n\nThe impact on the Fellows’ status manifested itself in a number of ways, including: giving them recognition as someone who knew about software sustainability and good coding practices; providing a badge which opened doors and allowed them to market themselves; and becoming more appealing as collaborators at the institutional, domain and interdisciplinary level. Four respondents reported that having a Fellow raised the profile of a department or institution. Table 4 illustrates the impact of the Fellowship on status with quotations.\n\n*** indicates removed to preserve anonymity.\n\nThere was evidence that the credibility conveyed by the Fellowship potentially contributed to the Institute’s mission to improve diversity: “Despite getting a PhD partially from a computer science programme, I could see that my skills and knowledge were always at least to some extent dismissed or doubted. I do not want to speculate whether this is due to gender bias or some other prejudice-based process or my own failing at looking professional, but since being elected a SSI fellow I most definitely observed a significant drop in mansplaining.” (R10).\n\nFellows benefitted from joining a community of like-minded individuals and the networking opportunities that arose from this. Respondents made 27 comments in relation to the Fellowship improving their network, 14 of which showed that this benefit went beyond themselves, to improve the software research communities within their institution/domain. R23 said: “The fellowship has been hugely beneficial to me and my career. 
The contacts and collaborations formed during my fellowship year have led, directly and indirectly, to a huge number of opportunities.” The benefits included increasing confidence; feeling part of the research software community and not an outsider; sharing good practices; being able to identify as a Research Software Engineer (RSE) and supporting their role in formulating an RSE community of practice via the RSE Association (www.rse.ac.uk).\n\nRespondents reported that the Fellowship gave them the mandate to collaborate with different organisations and institutions, as well as improving the local networking of those involved with research software. Three Fellows at one institution were able to work together.\n\nFellows from a single domain expressed that a number of them working with each other across years had had a cumulative effect over time, in effect seeding a hub of researchers/fellows who took sustainability seriously. There was a platform for them to then influence domain specific groups at different institutions increasing the impact and reach of promoting better sustainability practices. Fellows felt motivated to collaborate, form online communities, and contribute to the open source community.\n\nThe Fellowship ultimately provided community, friendship and motivation for new ways of doing things. The Fellowship also helped them become better scientists and ambassadors for sustainability issues in their community and thus better recognised. Table 5 illustrates the impact of the Fellowship on community and network with quotations.\n\nRespondents stated that the Programme had helped them to progress in their careers, either by way of a new job, promotion, or change in direction: “I can map my entire career trajectory from the opportunity that the fellowship gave me. 
One meeting led to another...” (R11).\n\nIn answer to the question, ‘If not already specified, how has being a Fellow helped your career progression?’ three respondents mentioned gaining confidence, three mentioned improving skills, seven mentioned improving their networks, and five mentioned improving their visibility. The programme had a significant effect for R23: “The fellowship, and then all the external collaborations and followed from it, have been directly cited as reasons for giving me top performance ratings over the last three years… Without this community of like-minded people to engage with I'm not sure I'd still be working in the same organisation, or even in research software at all.”\n\nAcross the other questions, 17 comments related to professional benefits for the Fellows themselves that included: improving personal knowledge and practices; understanding how much of research is software driven; developing a habit for research related blogging; identifying new areas in their own research fields; and thinking about research software engineering as a career. Fellows increased their confidence in research software development, and they were able to get career, technical and other advice from other Fellows, mentors, institute staff and others they had met at workshops.\n\nThe Fellowship awards had an even greater impact on the professional development of others, with 26 comments relating to this altogether. Fellows ran training courses, such as Software Carpentry6, spread best practice via workshops, and supported data sharing and reproducibility initiatives. Table 6 illustrates the impact of the Fellowship on professional development with quotations.\n\nFellows used the £3000 award for attending conferences and workshops that they normally would not be able to; organising events; running training; kick-starting an initiative (such as a product, service or approach); and inviting visitors. 
Although not everyone used the funds: “My position is probably different to many fellows in that I mostly wanted to be a fellow to show support for the SSI and the fellows network/community and to highlight the importance of this area in my institution. Access to funds wasn't a consideration” (R3), across the respondents they supported a wide range of activities, summarised in Table 7.\n\nIn answer to the question, ‘Have there been any negative consequences of your fellowship?’ 14 people said there had not been anything negative, and 7 people did not give an answer. One person commented that they sometimes had to explain that software sustainability was not the same as digital preservation, and that this disappointed the person they were talking to. Three respondents gave lighthearted answers: “I definitely spend more time on Twitter because of you guys!” (R10); spending time “struggling with installing and implementing open source software (just kidding, though it takes time, I thoroughly enjoy learning new things, and it's an investment in the future)” (R11) and “a lack of time to take advantage of all the opportunities – not a bad problem to have!” (R23)\n\nAlthough the programme itself did not appear to result in negative consequences, R17 commented that their institution “was not interested in [the Fellowship] at all.”\n\nIn answer to the question, “How would you improve the Fellowship Programme?” six respondents did not make any suggestions. Nine respondents recommended increasing the number/length of events, and one raised an issue around the distance that they were required to travel for an event. One respondent suggested making more significant funds available to Fellows, including providing salary, and two commented that administration of funds could be improved. Three people had suggestions for improving mentoring, including having non-academic mentors, and using existing Fellows as mentors. 
Two respondents, who had both moved away from the UK, thought it would be good for the Institute to build stronger links internationally. Three respondents suggested having more explicit roles/activities for Fellows over the longer term.\n\n\nLimitations\n\nThe study focused on the benefits of the Fellowship Programme. We chose to use the word ‘benefit’, rather than ‘impact’, because we wanted people to reflect on the potential positives that came from the Fellowship in the broadest terms. Whilst the authors did not anticipate that the Fellowship would result in negative consequences, and a question checked for these explicitly, the phrasing of the questions could have biased respondents towards seeing the programme in a positive light. The survey only captured the responses of a third of Fellowship holders, so we do not know the experiences of the remaining two thirds.\n\n\nConclusion\n\nThe survey evaluation provided evidence that the Fellowship programme has played a significant role in supporting and galvanising engaged people in contributing to the domain of research software engineering. The gains in community building, networking, individual status, individual learning and the development of others, leading to long term benefits, initiatives and communities of practice are significant given the modest investment. Seed corn funding approaches are noted as being particularly effective mechanisms of support7. The evaluation of the programme has shown the need to support research software in situ and credit the engineers and researchers who are working in this important area that supports reproducibility, reuse and the integrity of research investments.\n\n\nData availability\n\nDataset 1: SSI Fellowship evaluation 2012-2016 survey free text. The free text questions and answers for the survey. CSV file. *** indicates removed to preserve anonymity, 10.5256/f1000research.16231.d218703\n\nThe following is a description of the columns in the dataset:",
"appendix": "Grant information\n\nThis work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Grant EP/H043160/1 and EPSRC, BBSRC and ESRC Grant EP/N006410/1 for the UK Software Sustainability Institute.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nSufi S: The Software Sustainability Institute Fellowship Programme. CEUR Workshop Proceedings (CEUR-WS.org):WSSSPE: 4th Workshop on Sustainable Software for Science: Practice and Experiences. 1686. Reference Source\n\nTechnopolis: Evaluation of the Alexander von Humboldt Foundation’s Humboldt Research Fellowship Programme. 2009; [cited 2018 Aug 8]. Reference Source\n\nMeyer N, Bührer S: Impact Evaluation of the Erwin Schrödinger Fellowships with Return Phase. Karlsruhe: Fraunhofer Institute for Systems and Innovation Research ISI; 2014; 97. Reference Source\n\nKeith N, Jones R, Chow M: Impact Evaluation of the ICRETT, YY & ACSBI Fellowship Schemes. Union for International Cancer Control (UICC); 2013; [cited 2018 Aug 8]. Reference Source\n\nBraun V, Clarke V: Using thematic analysis in psychology. Qual Res Psychol. 2006 [cited 2018 Apr 11]; 2(4): 77–101. Publisher Full Text\n\nWilson G: Software Carpentry: lessons learned [version 2; referees: 3 approved]. F1000Res. 2016; 3: 62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe Royal Society, British Academy, Royal Academy of Engineering, et al.: Open for business: a nation of global researchers and innovators. [cited 2018 Apr 11]. Reference Source\n\nSufi S, Jay C: Dataset 1 in: Raising the status of software in research: A survey-based evaluation of the Software Sustainability Institute Fellowship Programme. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16231.d218703"
}
|
[
{
"id": "39053",
"date": "17 Oct 2018",
"name": "Colin C. Venters",
"expertise": [
"Reviewer Expertise Sustainable Software Engineering"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper reports the results of a qualitative study into the perceived effectiveness of the Software Sustainability Institute's Fellowship program between 2012-2016; a programme that provides fellows with financial support to attend a range of events to help build awareness in sustainable software engineering \"best\" practice and processes relevant to research software. The authors suggest that the results of the study reveal that the fellowship programme supports communities of best practice and knowledge transfer in the development of sustainable research software. However, this position is not sufficiently supported by the anecdotal evidence reported in the paper.\nThis was an extremely interesting paper to read; hopefully the first of many by the authors or the Institute that examine the efficacy of the programme and the impact of the Software Sustainability Institute in transforming the development of research software as a whole. The topic is of significant interest to those working in the field of research software engineering - especially potential fellows - and of general interest to the broader software engineering community as a whole as it pertains to having identified individuals that practice and promote sustainable software engineering \"best\" practice with a particular focus on research software as a first-class research object. 
The paper attempts to address a pertinent question regarding the efficacy of the fellowship programme as a mechanism for gathering intelligence about research and software from all disciplines and identifying and communicating good software best practice in a range of different domains.\nThe paper would benefit enormously from a background section that included the motivation for the creation of the Software Sustainability Institute in the first instance i.e. unsustainability of academic software, and linking this to the study by Hettrick et al. 20141 who demonstrated the importance of software in research. In addition, the paper would also benefit from introducing and defining key terms and concepts for readers unfamiliar with the topic of software sustainability and linking these to emerging themes being driven forward by leading researchers and groups in the fields of requirements engineering, software architectures, HCI, and software engineering in general that address software sustainability.\nThe overall research methodology is described in sufficient detail. It is somewhat surprising and disappointing that critical demographic data has not been reported in the paper including the recipient's job role and their institutions at the time the fellowships were awarded, as this information is already publicly available and free to harvest should anyone be interested in doing so; this would seem to be a major oversight and the rationale in the paper for its exclusion would appear to be rather weak. In addition, the paper would have benefited enormously by including data related to recipient’s track record and how they had promoted software sustainability prior to the fellowship programme as a comparison; this would have provided a baseline to establish how the programme transformed the recipients into ambassadors of good software practice in their domains. 
However, without having established what good or best practice means in the first instance, any such claims are at best difficult to evaluate.\nIt is worth noting that not all fellows are research software engineers. As such, some analysis on the range of the recipients' job roles would have been interesting, since the programme is designed to promote diversity among people at different stages in their careers.\nThe results of their analysis revealed a number of interesting findings. However, the main result revealed a worrying trend in that the primary outcome is simply an improvement in the status of beneficiaries of the fellowship programme. This raises serious questions as to whether the programme has failed to achieve its primary aim, as a key driver of the programme is the promotion of sustainable software engineering practice to produce verifiable, shareable and useful research output.\nThe paper also contains several unsubstantiated statements to support a number of claims regarding the perceived benefits of the programme. The claim by one recipient that the collective fellowship in their domain has had a global impact cannot be substantiated and is at best anecdotal. Similarly, it is unclear how the programme helped recipients become \"better\" scientists. Against what baseline is this improvement being measured? Without any control mechanism in the study, it is impossible to demonstrate with any certainty that the perceived benefits of the programme could not have been achieved in another way.\nIt would have been interesting for the study to have considered what better sustainability practice means in practice and to identify the range of sustainability issues that the variety of communities face. The use of a distributed version control system and commenting code does not in itself make software sustainable.\nBased on the results of this study, the primary beneficiaries of the fellowship programme are the recipients themselves. 
The benefits to the broader community outside the echo chamber have yet to be established and provide a fruitful area for further research. Overall this was an interesting study of an exciting fellowship programme that lays the foundation for future assessment of the programme as a whole in order to assess the wider benefits to the broader community at large.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "4249",
"date": "22 Nov 2018",
"name": "Shoaib Sufi",
"role": "Author Response",
"response": "We thank the reviewer for his positive and helpful comments. We respond to the main themes in the reviewer’s comments and suggestions below:It should be noted that the Fellowship is broad in its aim, and the survey aimed to capture its impact in the broadest terms, and not just from the perspective of software sustainability skills and uptake. By way of example, the use of distributed version control and commenting code are necessary conditions for sustainable software but they may not be sufficient conditions for sustainable software. The authors recognise that the path to software sustainability is a journey and the adoption of better practices are way-marks on this journey.We decided to exclude background/demographic data of respondents as it may have enabled readers to identify individuals. It is true that information about the Fellows is available online; the issue in the current study is linking this to individual responses. Anonymity is an important part of the study methodology as a respondent’s public response may be more guarded than a private response. The dataset show individuals criticising other members of their research community or institution. It would not have been appropriate to include alongside these comments information that could be used to identify them, and would have violated our ethical approval.The data are valid qualitative responses from a survey, collected using a well-established research methodology, and analysed in a systematic manner, and are thus not purely anecdotal. Whilst it is true that we cannot validate the objective truth of any response, this is true of responses to any survey, and is thus an accepted limitation of the design. Fellows were specifically asked to reflect on their own experience, and the survey thus depends upon self-reporting. 
There is unfortunately no baseline against which to compare the responses, as we cannot know what they would have done had they not received the Fellowship.\n\nThe fact that status emerged as a key theme is interesting rather than worrying. Our aim with this research was to systematically examine the self-reported benefits of the Fellowship to individuals and others. This could have resulted in people talking only about the events they had run; the value of the status conferred by the badge of the Fellowship is a result that has wider relevance for the Research Software Engineering Community, as the comments show that software engineering work is still perceived as being of lower value, and the Fellowship is helping to change that. The study did not aim to assess directly whether the Fellowship was leading to more sustainable software, as this is not methodologically feasible. It should be noted that in terms of professional development more responses (26) related to the professional development of others than to the Fellows’ own professional development (17); see Table 3. We see this as evidence of its benefit to the community.\n\nWe appreciate the suggestions for improving the background and will add this to a future version of the paper detailing the motivation for the Institute. We will comment on the relationship of this work with that by Hettrick et al. on the importance of Software in Research. 
Generating a cross-walk of software sustainability concerns to emerging themes in other research areas such as those mentioned (requirements engineering, software architectures, HCI, and software engineering) is out of scope for this paper, but might form useful future work.\n\nA study on what better sustainability practices mean and identifying sustainability issues and which communities face them is a very large endeavour, and this developing area is the focus of organisations such as the UK Software Sustainability Institute (www.software.ac.uk), WSSSPE (wssspe.researchcomputing.org.uk), URSSI (urssi.us), BSSw (bssw.io) and related sustainability researchers. This is out of scope for this paper. Certainly, portions of this space would make for very interesting studies and we thank the reviewer for his comments and suggestions in this regard."
}
]
},
{
"id": "42711",
"date": "24 Jan 2019",
"name": "Dan Sholler",
"expertise": [
"Reviewer Expertise Technology adoption and resistance",
"digital infrastructure development",
"digital infrastructure governance",
"open science organizations",
"open source research software organizations",
"information systems",
"information science",
"organization science"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper presents the results of a survey investigating the impact of the UK Software Sustainability Institute’s (SSI) Fellowship Programme on recipients’ research, institutions, and careers. The authors note that their approach to Programme evaluation is novel in that, unlike other reports on programme outcomes (e.g., the Humboldt Foundation), it is a peer-reviewed study using only the data collected via survey. Therefore, the results contribute empirical findings to the software sustainability literature. The study found that the Programme elevated the perceived importance of software development for research as well as the status of the fellowship awardees. The authors state that the survey’s findings indicate the importance and value of research software and the people who develop it.\n\nThe paper also provides a sound overview of the SSI Fellowship’s goals: Supporting research software developers in their own work and in championing research software’s value (e.g., promoting reproducible research and open science). The introduction notes how awardees are selected and how the Programme fosters diversity in career stages and disciplines.\n\nOverall, I support the indexing of this study with one substantial addition (a discussion section) and several minor changes. I provide some details on what might be included in the discussion in Comment 3 below.\nThe authors clearly state that they wish to contribute to the literature on software sustainability. 
However, they do not provide any definitions of software sustainability, generally, and the types of work and workers needed to achieve sustainability. These additions are necessary to contextualize the findings within the broader discussion about software sustainability. Although I am sure the authors are familiar with this emerging literature because they have made substantial contributions to it, I provide some references to start with (Crouch et al., 20131, Calero et al., 20132, Jiménez et al., 20173, Katz et al., 20144 and Venters et al., 20145).\n\nThe authors could reflect more on why there were far more respondents who identified as male than female. For example, does this set of responses reflect the overall makeup of the Fellowship Programme? If not, why might that be?\n\nThe paper lacks a discussion that integrates and synthesizes the discrete findings sections. There are several possible ways forward to develop such a discussion section. The first suggestion is to contextualize the free text themes within the forced choice responses. For example, did the 4 respondents who answered “no” to the question about career advancement thematically respond to free text questions, particularly the one about negative impacts? Because the sample size is small, it may be possible to point out the threads in the responses of the “no” participants vs. the “yes” participants. Another way forward is to put all of the sections into conversation with one another. For example, in the discussion, the authors could discuss why (in relation to the existing literature or public discourse) fellowship awards had a greater impact on professional development of others than on the individuals. The “how” is already present and appreciated—e.g., Software Carpentry workshops. Likewise, the authors might return to statements such as R17’s institution not being interested in the fellowship, and how that relates to the reported benefits to the institution. 
In sum, some more reflection on the responses and how they relate to the broader discourse around software sustainability and support for research software development would be very much appreciated.\n\nI commend the authors for explicitly stating the themes in the findings section. I have one issue with how the authors collaborated for the qualitative coding of the free text responses: Did the authors use any “test data” or something similar to ensure inter-rater reliability? For a study like this one, I do not think it is important to report a quantitative measure of IRR, but a bit more detail about how authors reached agreement would be helpful and contribute to a perception of validity for the reader. One way of doing this is to briefly describe an example of disagreement between the two authors on a particular theme or instance of a theme and describe how the authors reached a resolution.\n\nWhat software, if any, was used for qualitative coding of the data?\n\nI thank the authors for providing Table 4 with some illustrative examples of responses. I also appreciate how the authors presented some examples in the text and place them in conversation with one another as you might see in an interview-based study (indeed, the prevalence of free-text questions lends itself nicely to presenting the results in this way).\n\nWith regard to the statement “Across the other questions, 17 comments related to professional benefits for the Fellows themselves that included: improving personal knowledge and practices; understanding how much of research is software driven; developing a habit for research related blogging; identifying new areas in their own research fields; and thinking about research software engineering as a career”: Can the authors provide some information about the prevalence of each category of professional benefits? 
Counts are not necessarily the only way to do this; the authors might add phrasing indicating whether one or more of the categories was more prominent than the others.\n\nIn the limitations section, the authors might also note the bias in who would respond to such a survey. In other words, awardees who had a positive experience might be more inclined to respond to a survey about the program’s benefits.\nMiscellaneous notes:\nThe authors might consider moving the sentence “The study received approval from the Computer Science School Panel (ref: 2017-2308-3295) on the delegated authority of the University Research Ethics Committee (UREC), University of Manchester” to the first paragraph under “Methods,” where it seems more appropriate.\n\nGiven the experience of the authors in this domain, to what extent do they agree with/disagree with the suggested improvements, and what other improvements do they suggest?\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "40285",
"date": "18 Feb 2019",
"name": "Lois Curfman McInnes",
"expertise": [
"Reviewer Expertise high-performance scientific computing",
"scalable numerical algorithms and software",
"scientific software ecosystems",
"software productivity and sustainability"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\n18th February 2019: This report was briefly published as an Approved report, and has now been updated to an Approved with Reservations at the request of the reviewer.\n\nThis paper evaluates the Fellowship Programme of the Software Sustainability Institute (SSI), which has the primary goals of encouraging Fellows to develop their interests in software sustainability and to become ambassadors of good software practice in their communities.\n\nThe paper analyses a survey of people who were Fellows during 2012-2016 (26 respondents of 78 Fellows contacted). The paper’s goals are contributing to literature on software sustainability and understanding the program’s impact. The results of the study include only data obtained from the study itself, which featured open questions. The paper explains the methods of the survey and discusses results, around four themes that emerged from analysis of free-text answers: status, community/network, professional development, and resources.\n\nThis paper and the SSI Fellowship Programme overall are of strong interest to international communities who are working to advance software practices as a key element of increasing overall scientific productivity. Overall, the paper is well written and clearly explains the approach and analysis of the survey. The analysis concludes that the Fellowship promotes the status of the role of research software and of the Fellows themselves. 
A key observation is that the Fellowship promotes community and provides a platform for Fellows to influence their domain-specific communities in advancing practices of research software. The Fellowship also contributes strongly to professional development.\n\nMy main criticism of the paper is the implicit assumption that the reader understands the importance of software sustainability and the scope of software practices addressed by the Fellowship Programme. I recommend adding background information about software sustainability and the SSI, including references, in order for the paper to be more effective as a stand-alone document.\n\nWhile I understand the reason for the authors to exclude information about employment during and after the Programme (preserving anonymity), it would be interesting to explore the changes in employers and roles over time (of all Fellows), and whether the SSI Fellowship Programme influenced that. Also, it would be interesting to explore changes in the software practices and culture of domain-specific communities, to help understand the longer-term impact of the SSI Fellowship Programme.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1599
|
https://f1000research.com/articles/7-952/v1
|
27 Jun 18
|
{
"type": "Method Article",
"title": "Swimming downstream: statistical analysis of differential transcript usage following Salmon quantification",
"authors": [
"Michael I. Love",
"Charlotte Soneson",
"Rob Patro",
"Charlotte Soneson",
"Rob Patro"
],
"abstract": "Detection of differential transcript usage (DTU) from RNA-seq data is an important bioinformatic analysis that complements differential gene expression analysis. Here we present a simple workflow using a set of existing R/Bioconductor packages for analysis of DTU. We show how these packages can be used downstream of RNA-seq quantification using the Salmon software package. The entire pipeline is fast, benefiting from inference steps by Salmon to quantify expression at the transcript level. The workflow includes live, runnable code chunks for analysis using DRIMSeq and DEXSeq, as well as for performing two-stage testing of DTU using the stageR package, a statistical framework to screen at the gene level and then confirm which transcripts within the significant genes show evidence of DTU. We evaluate these packages and other related packages on a simulated dataset with parameters estimated from real data.",
"keywords": [
"RNA-seq",
"workflow",
"differential transcript usage",
"Salmon",
"DRIMSeq",
"DEXSeq",
"stageR",
"tximport"
],
"content": "Introduction\n\nRNA-seq experiments can be analyzed to detect differences across groups of samples in total gene expression – the total expression produced by all isoforms of a gene – and additionally differences in transcript usage within a gene. If the amount of expression switches among two or more isoforms of a gene, then the total gene expression may not change by a detectable amount, but the differential transcript usage is nevertheless biologically relevant. While many tutorials and workflows in the Bioconductor project address differential gene expression, there are fewer workflows for performing a differential transcript usage analysis, which provides critical and complementary information to a gene-level analysis. Some of the existing Bioconductor packages and functions that can be used to detect differential transcript usage include BitSeq1, DEXSeq (originally designed for differential exon usage)2, diffSpliceDGE from the edgeR package3,4, diffSplice from the limma package5,6, DRIMSeq7, stageR8, and SGSeq9. The Bioconductor package IsoformSwitchAnalyzeR10 is well documented and can be seen as an alternative to this workflow; IsoformSwitchAnalyzeR allows for import of data from various quantification methods, including Salmon, and allows for statistical inference using DRIMSeq, as well as a rank-based statistical test of transcript proportions. In addition, IsoformSwitchAnalyzeR includes functions for obtaining the nucleotide and amino acid sequence consequences of isoform switching, which is not covered in this workflow. Other packages related to splicing can be found at the DifferentialSplicing BiocViews. For more information about the Bioconductor project and its core infrastructure, please refer to the overview by Huber et al.11.\n\nWe note that there are numerous other methods for detecting differential transcript usage outside of the Bioconductor project. 
The DRIMSeq publication is a good reference for these, having descriptions and comparisons with many current methods7. This workflow will build on the methods and vignettes from three Bioconductor packages: DRIMSeq, DEXSeq, and stageR.\n\nPreviously, some of the developers of the Bioconductor packages edgeR and DESeq2 have collaborated to develop the tximport package12 for summarizing the output of fast transcript-level quantifiers, such as Salmon13, Sailfish14, and kallisto15. The tximport package focuses on preparing estimated transcript-level counts, abundances and effective transcript lengths, for gene-level statistical analysis using edgeR3, DESeq216 or limma-voom6. tximport produces an offset matrix to accompany gene-level counts, that accounts for a number of RNA-seq biases as well as differences in transcript usage among transcripts of different length that would bias an estimator of gene fold change based on the gene-level counts17. tximport can alternatively produce a matrix of data that is roughly on the scale of counts, by scaling transcript-per-million (TPM) abundances to add up to the total number of mapped reads. This counts-from-abundance approach directly corrects for technical biases and differential transcript usage across samples, obviating the need for the accompanying offset matrix.\n\nComplementary to an analysis of differential gene expression, one can use tximport to import transcript-level estimated counts, and then pass these counts to packages such as DRIMSeq or DEXSeq for statistical analysis of differential transcript usage. Following a transcript-level analysis, one can aggregate evidence of differential transcript usage to the gene level. 
The stageR package in Bioconductor provides a statistical framework to screen at the gene-level for differential transcript usage with gene-level adjusted p-values, followed by confirmation of which transcripts within the significant genes show differential usage with transcript-level adjusted p-values8. The method controls the overall false discovery rate (OFDR)18 for such a two-stage procedure, which will be discussed in more detail later in the workflow. We believe that stageR represents a principled approach to analyzing transcript usage changes, as the methods can be evaluated against a target error rate in a manner that mimics how the methods will be used in practice. That is, following rejection of the null hypothesis at the gene-level, investigators would likely desire to know which transcripts within a gene participated in the differential usage.\n\nHere we provide a basic workflow for detecting differential transcript usage using Bioconductor packages, following quantification of transcript abundance using the Salmon method. This workflow includes live, runnable code chunks for analysis using DRIMSeq and DEXSeq, as well as for performing stage-wise testing of differential transcript usage using the stageR package. For the workflow, we use data that is simulated, so that we can also evaluate the performance of methods for differential transcript usage, as well as differential gene and transcript expression. The simulation was constructed using distributional parameters estimated from the GEUVADIS project RNA-seq dataset19 quantified by the recount2 project20, including the expression levels of the transcripts, the amount of biological variability of gene expression levels across samples, and realistic coverage of reads along the transcript.\n\n\nMethods\n\nFirst we describe details of the simulated data, which will be used in the following workflow. Understanding the details of the simulation will be useful for assessing the methods in the later sections. 
All of the code used to simulate RNA-seq experiments and write paired-end reads to FASTQ files can be found at an associated GitHub repository for the simulation code21, and the reads and quantification files can be downloaded from Zenodo22–25. Salmon13 was used to estimate transcript-level abundances for a single sample (ERR188297) of the GEUVADIS project19, and this was used as a baseline for transcript abundances in the simulation. Transcripts that were associated with estimated counts less than 10 had abundance thresholded to 0, all other transcripts were considered “expressed”. alpine26 was used to estimate realistic fragment GC bias from 12 samples from the GEUVADIS project, all from the same sequencing center (the first 12 samples from CNAG-CRG in Supplementary Table 2 from Love et al.26). DESeq216 was used to estimate mean and dispersion parameters for a Negative Binomial distribution for gene-level counts for 458 GEUVADIS samples provided by the recount2 project20. An example of DESeq2-generated estimates of dispersion per gene can be seen in Supplementary Figure 1. Note that, while gene-level dispersion estimates were used to generate underlying transcript-level counts, additional uncertainty on the transcript-level data is a natural consequence of the simulation, as the transcript-level counts must be estimated (the underlying transcript counts are not provided to the methods).\n\npolyester27 was used to simulate paired-end RNA-seq reads for two groups of 12 samples each, with realistic fragment GC bias, and with dispersion on transcript-level counts drawn from the joint distribution of mean and dispersion values estimated from the GEUVADIS samples. To compare DRIMSeq and DEXSeq in further detail, we generated an additional simulation in which dispersion parameters were assigned to genes via matching on the gene-level count, and then all transcripts of a gene had counts generated using the same per-gene dispersion. 
The first sample for group 1 and the first sample for group 2 followed the realistic GC bias profile of the same GEUVADIS sample, and so on for all 12 samples. This pairing of the samples was used to generate balanced data, but not used in the statistical analysis. countsimQC28 was used to examine the properties of the simulation relative to the dataset used for parameter estimation, and the full report can be accessed at the associated GitHub repository for simulation code21.\n\nDifferential expression across two groups was generated as follows: 70% of the genes were set as null genes, where abundance was not changed across the two groups. For 10% of genes, all isoforms were differentially expressed at a log fold change between 1 and 2.58 (fold change between 2 and 6). The set of transcripts in these genes was classified as DGE (differential gene expression) by construction, and the expressed transcripts were also DTE (differential transcript expression), but they did not count as DTU (differential transcript usage), as the proportions within the gene remained constant. To simulate balanced differential expression, one of the two groups was randomly chosen to be the baseline, and the other group would have its counts multiplied by the fold change. For 10% of genes, a single expressed isoform was differentially expressed at a log fold change between 1 and 2.58. This set of transcripts was DTE by construction. If the chosen transcript was the only expressed isoform of a gene, this counted also as DGE and not as DTU, but if there were other isoforms that were expressed, this counted for both DGE and DTU, as the proportion of expression among the isoforms was affected. For 10% of genes, differential transcript usage was constructed by exchanging the TPM abundance of two expressed isoforms, or, if only one isoform was expressed, exchanging the abundance of the expressed isoform with a non-expressed one. This counted for DTU and DTE, but not for DGE. 
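To make the construction above concrete, here is a small Python sketch of two of the rules: drawing a balanced fold change with log2 FC uniform in [1, 2.58], and building DTU by exchanging the abundances of a gene's two most expressed isoforms. This is an illustration only, with names of our own choosing; the actual simulation was performed in R with polyester.

```python
import random

def draw_fold_change(rng):
    """Draw a fold change as in the simulation design: log2 fold change
    uniform between 1 and 2.58, i.e. a fold change between 2 and 6."""
    return 2 ** rng.uniform(1.0, 2.58)

def dtu_swap(tpms):
    """Construct DTU for a gene by exchanging the TPM abundances of its
    two most expressed isoforms. The gene total is unchanged, so the
    gene is DTU (and DTE) by construction, but not DGE."""
    order = sorted(range(len(tpms)), key=lambda i: tpms[i], reverse=True)
    i, j = order[0], order[1]
    swapped = list(tpms)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return swapped
```

Note that the swap leaves the summed gene abundance untouched, which is exactly why such genes change isoform proportions without changing total gene expression.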
An MA plot of the simulated transcript abundances for the two groups is shown in Figure 1.\n\nEach point depicts a transcript, with the average log2 abundance in transcripts-per-million (TPM) on the x-axis and the difference between the two groups on the y-axis. Of the transcripts which are expressed with TPM > 1 in at least one group, 77% are null transcripts (grey), which fall by construction on the M=0 line, and 23% are differentially expressed (green, orange, or purple). As transcripts can belong to multiple categories of differential gene expression (DGE), differential transcript expression (DTE), and differential transcript usage (DTU), here the transcripts are colored by which genes they belong to (those selected to be DGE-, DTE-, or DTU-by-construction).\n\nThis workflow was designed to work with R 3.5 or higher, and the DRIMSeq, DEXSeq, stageR, and tximport packages for Bioconductor version 3.7 or higher. Bioconductor packages should always be installed following the official instructions. The workflow uses a subset of all genes to speed up the analysis, but the Bioconductor packages can easily be run for this dataset on all human genes on a laptop in less than an hour. Timing for the various packages is included within each section.\n\n\nQuantification and data import\n\nWe used Salmon version 0.10.0 to quantify abundance and effective transcript lengths for all of the 24 simulated samples. For this workflow, we will use the first six samples from each group. We quantified against the GENCODE human annotation version 28, which was the same reference used to generate the simulated reads. We used the transcript sequences FASTA file that contains “Nucleotide sequences of all transcripts on the reference chromosomes”. When downloading the FASTA file, it is useful to download the corresponding GTF file, as this will be used in later sections.\n\nTo build the Salmon index, we used the following command. 
Recent versions of Salmon will discard identical sequence duplicate transcripts, and keep a log of these within the index directory.\n\n\n\nTo quantify each sample, we used the following command, which says to quantify with six threads using the GENCODE index, with inward and unstranded paired end reads, using fragment GC bias correction, writing out to the directory sample and using as input these two reads files. The library type is specified by -l IU (inward and unstranded) and the options are discussed in the Salmon documentation. Recent versions of Salmon can automatically detect the library type by setting -l A. Such a command can be automated in a bash loop using bash variables, or one can use more advanced workflow management systems such as Snakemake29 or Nextflow30.\n\n\n\nWe can use tximport to import the estimated counts, abundances and effective transcript lengths into R. We recommend to construct a CSV file that keeps track of the sample identifiers and any relevant variables, e.g. condition, time point, batch, and so on. Here we have made a sample CSV file and provided it along with this workflow’s R package.\n\nIn order to find this file, we first need to know where on the machine this workflow package lives, so we can point to the extdata directory where the CSV file is located. These two lines of code load the workflow package and find this directory on the machine. These two lines of code would therefore not be part of a typical workflow.\n\n\n\nThe CSV file records which samples are condition 1 and which are condition 2. The columns of this CSV file can have any names, although sample_id will be used later by DRIMSeq, and so using this column name allows us to pass this data.frame directly to DRIMSeq at a later step.\n\n\n\n\n\n\n\n\n\n\n\n\n\nWe can then import transcript-level counts using tximport. We suggest for DTU analysis to generate counts from abundance, using the scaledTPM method described by Soneson et al.12. 
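The arithmetic behind scaledTPM can be illustrated with a short sketch: scale each sample's TPM column so it sums to that sample's number of mapped reads. This is a simplified Python illustration of the idea, not tximport's actual R implementation (which also handles effective lengths and other options).

```python
def scaled_tpm_counts(tpm, lib_sizes):
    """Scale each sample's TPM column to sum to that sample's number of
    mapped reads, yielding count-scale data that does not scale with
    transcript length (the scaledTPM counts-from-abundance approach).

    tpm: list of rows (one per transcript), each a list of per-sample TPMs.
    lib_sizes: number of mapped reads per sample.
    """
    n_samples = len(lib_sizes)
    col_sums = [sum(row[j] for row in tpm) for j in range(n_samples)]
    return [[row[j] / col_sums[j] * lib_sizes[j] for j in range(n_samples)]
            for row in tpm]
```

Because every column is rescaled by a single per-sample factor, within-gene proportions of these count-scale values match the proportions of the underlying TPM abundances, which is the property wanted for DTU testing.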
The countsFromAbundance option of tximport uses estimated abundances to generate roughly count-scaled data, such that each column will sum to the number of reads mapped for that library. We recommend scaledTPM for differential transcript usage so that the estimated proportions fit by DRIMSeq in the following sections correspond to the proportions of underlying abundance.\n\nIf instead of scaledTPM, we used the original estimated transcript counts (countsFromAbundance=\"no\"), or if we used lengthScaledTPM transcript counts, then a change in transcript usage among transcripts of different length could result in a changed total count for the gene, even if there is no change in total gene expression. This is because the original transcript counts and lengthScaledTPM transcript counts scale with transcript length, while scaledTPM transcript counts do not. For testing DTU using DRIMSeq and DEXSeq, it is convenient if the count-scale data do not scale with transcript length within a gene. Note that this could be corrected by an offset, but this is not easily implemented in the current DTU analysis packages. While this workflow only considers existing software features, we are considering developing a new countsFromAbundance method which would scale abundance for all transcripts of a gene by a fixed gene length, then each sample by its number of mapped reads, therefore balancing between the benefits of scaledTPM and lengthScaledTPM.\n\nThe following code chunk is not evaluated, but instead we will load a pre-constructed matrix of counts. The actual quantification files for this dataset have been made publicly available; see the Data availability section at the end of this workflow.\n\n\n\nBioconductor offers numerous approaches for building a TxDb object, a transcript database that can be used to link transcripts to genes (among other uses). 
We ran the following unevaluated code chunks to generate a TxDb, and then used the select function with the TxDb to produce a corresponding data.frame called txdf which links transcript IDs to gene IDs. In this TxDb, the transcript IDs are called TXNAME and the gene IDs are called GENEID. The version 28 human GTF file was downloaded from the GENCODE website when downloading the transcripts FASTA file.\n\n\n\nOnce the TxDb database has been generated and saved, it can be quickly reloaded:\n\n\n\n\nStatistical analysis of differential transcript usage\n\nWe load the cts object as created in the tximport code chunks. This contains count-scale data, generated from abundance using the scaledTPM method. The column sums are equal to the number of mapped paired-end reads per experiment. The experiments have between 31 and 38 million paired-end reads that were mapped to the transcriptome using Salmon.\n\n\n\n\n\n\n\n\n\nWe also have the txdf object giving the transcript-to-gene mappings (for construction, see previous section). This is contained in a file called simulate.rda that contains a number of R objects with information about the simulation, that we will use later to assess the methods’ performance.\n\n\n\n\n\n\n\n\n\n\n\n\n\nIn order to run DRIMSeq, we build a data.frame with the gene ID, the feature (transcript) ID, and then columns for each of the samples:\n\n\n\nWe can now load the DRIMSeq package and create a dmDSdata object, with our counts and samps data.frames. Typing in the object name and pressing return will give information about the number of genes:\n\n\n\n\n\nThe dmDSdata object has a number of specific methods. Note that the rows of the object are gene-oriented, so pulling out the first row corresponds to all of the transcripts of the first gene:\n\n\n\n\n\n\n\n\n\nIt will be useful to first filter the object, before running procedures to estimate model parameters. 
This greatly speeds up the fitting and removes transcripts that may be troublesome for parameter estimation, e.g. estimating the proportion of expression among the transcripts of a gene when the total count is very low. We first define n to be the total number of samples, and n.small to be the sample size of the smallest group. We use all three of the possible filters: for a transcript to be retained in the dataset, we require that (1) it has a count of at least 10 in at least n.small samples, (2) it has a relative abundance proportion of at least 0.1 in at least n.small samples, and (3) the total count of the corresponding gene is at least 10 in all n samples. This differs from the DRIMSeq vignette example code, which uses only the two count filters.\n\nIt is important to consider what types of transcripts may be removed by the filters, and potentially adjust depending on the dataset. If n were large, it would make sense to allow perhaps a few samples to have very low counts, so lowering min_samps_gene_expr to some factor multiple (< 1) of n, and likewise for the first two filters for n.small. The second filter means that if a transcript does not make up more than 10% of the gene’s expression for at least n.small samples, it will be removed. If this proportion seems too high, for example, if very lowly expressed isoforms are of particular interest, then the filter can be omitted or min_feature_prop lowered. For a concrete example, if a transcript goes from a proportion of 0% in the control group to a proportion of 9% in the treatment group, it would be removed by the above 10% filter. After filtering, this dataset has 7,764 genes.\n\n\n\n\n\nThe dmDSdata object only contains genes that have more than one isoform, which makes sense as we are testing for differential transcript usage. 
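The three filters described above map onto arguments of DRIMSeq's dmFilter function; a sketch for the 6 vs 6 comparison, assuming the dmDSdata object is named d:

```r
library(DRIMSeq)

n <- 12       # total number of samples
n.small <- 6  # sample size of the smallest group

# (1) transcript count filter, (2) transcript proportion filter,
# (3) total gene count filter
d <- dmFilter(d,
              min_samps_feature_expr = n.small, min_feature_expr = 10,
              min_samps_feature_prop = n.small, min_feature_prop = 0.1,
              min_samps_gene_expr = n, min_gene_expr = 10)
```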
We can find out how many of the remaining genes have N isoforms by tabulating the number of times we see a gene ID, then tabulating the output again:\n\n\n\n\n\nWe create a design matrix, using a design formula and the sample information contained in the object, accessed via samples. Here we use a simple design with just two groups, but more complex designs are possible. For some discussion of complex designs, one can refer to the vignettes of the limma, edgeR, or DESeq2 packages.\n\n\n\n\n\nSolely to speed up the live code chunks in this workflow, we subset to the first 250 genes, representing about one thirtieth of the dataset. This step would not be run in a typical workflow.\n\n\n\n\n\nWe then use the following three functions to estimate the model parameters and test for DTU. We first estimate the precision, which is related to the dispersion in the Dirichlet Multinomial model via the formula dispersion = 1/(1 + precision). Because precision appears in the denominator of the right-hand side, the two quantities are inversely related: higher dispersion – counts more variable around their expected value – is associated with lower precision. For full details about the DRIMSeq model, one should read both the detailed software vignette and the publication7. After estimating the precision, we fit regression coefficients and perform null hypothesis testing on the coefficient of interest. Because we have a simple two-group model, we test the coefficient associated with the difference between condition 2 and condition 1, called condition2. The following code takes about half a minute, and so a full analysis on this dataset takes about 15 minutes on a laptop.\n\n\n\n\n\n\n\n\n\n\n\n\n\nTo build a results table, we run the results function. 
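A sketch of these estimation, fitting, and testing steps, followed by the two forms of the results call; design_full is assumed to be the design matrix built above, and d the filtered dmDSdata object:

```r
# Estimate precision, fit regression coefficients, and test the
# coefficient for the difference between condition 2 and condition 1.
set.seed(1)
d <- dmPrecision(d, design = design_full)
d <- dmFit(d, design = design_full)
d <- dmTest(d, coef = "condition2")

# One p-value per gene (default), or one per transcript:
res <- DRIMSeq::results(d)
res.txp <- DRIMSeq::results(d, level = "feature")
```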
We can generate a single p-value per gene, which tests whether there is any differential transcript usage within the gene, or a single p-value per transcript, which tests whether the proportions for this transcript changed within the gene:\n\n\n\n\n\n\n\n\n\nBecause the pvalue column may contain NA values, we use the following function to turn these into 1’s. The NA values would otherwise cause problems for the stage-wise analysis.\n\n\n\nWe can plot the estimated proportions for one of the significant genes, where we can see evidence of switching (Figure 2).\n\n\n\n\n\n\n\nBecause we have been working with only a subset of the data, we now load the results tables that would have been generated by running DRIMSeq functions on the entire dataset.\n\n\n\n\n\n\n\n\n\nA typical analysis of differential transcript usage would involve asking first: “which genes contain any evidence of DTU?”, and secondly, “which transcripts in the genes that contain some evidence may be participating in the DTU?” Note that a gene may pass the first stage without exhibiting enough evidence to identify one or more transcripts that are participating in the DTU. The stageR package is designed to allow for such two-stage testing procedures, where the first stage is called a screening stage and the second stage a confirmation stage8. The methods are general, and can also be applied to testing, for example, changes across a time series followed by investigation of individual time points, as shown in the stageR package vignette. We show below how stageR is used to detect DTU and how to interpret its output.\n\nWe first construct a vector of p-values for the screening stage. 
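For example (a two-row stand-in for the DRIMSeq gene-level table res is used here so the sketch is self-contained; strp is a helper introduced for illustration):

```r
# Keep only the first 15 characters of an Ensembl ID, dropping the
# version suffix (e.g. "ENSG00000000003.14" -> "ENSG00000000003").
strp <- function(x) substr(x, 1, 15)

# Stand-in for the DRIMSeq gene-level results table `res`:
res <- data.frame(gene_id = c("ENSG00000000003.14", "ENSG00000000005.5"),
                  pvalue = c(0.01, 0.5),
                  stringsAsFactors = FALSE)

# Screening-stage p-values: one per gene, named by stripped gene ID.
pScreen <- res$pvalue
names(pScreen) <- strp(res$gene_id)
```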
Because of how the stageR package will combine transcript and gene names, we need to strip the gene and transcript version numbers from their Ensembl IDs (this is done by keeping only the first 15 characters of the gene and transcript IDs).\n\n\n\nWe construct a one column matrix of the confirmation p-values:\n\n\n\nWe arrange a two column data.frame with the transcript and gene identifiers.\n\n\n\nThe following functions then perform the stageR analysis. We must specify an alpha, which will be the overall false discovery rate target for the analysis, defined below. Unlike typical adjusted p-values or q-values, we cannot choose an arbitrary threshold later: after specifying alpha=0.05, we need to use 5% as the target in downstream steps. There are also convenience functions getSignificantGenes and getSignificantTx, which are demonstrated in the stageR vignette.\n\n\n\nThe final table with adjusted p-values summarizes the information from the two-stage analysis. Only genes that passed the filter are included in the table, so the table already represents screened genes. The transcripts with values in the column, transcript, less than 0.05 pass the confirmation stage on a target 5% overall false discovery rate, or OFDR. This means that, in expectation, no more than 5% of the genes that pass screening will either (1) not contain any DTU, so be falsely screened genes, or (2) contain a transcript with a transcript adjusted p-value less than 0.05 which does not participate in DTU, so contain a falsely confirmed transcript. The stageR procedure allows us to look at both the genes that passed the screening stage and the transcripts with adjusted p-values less than our target alpha, and understand what kind of overall error rate this procedure entails. 
This cannot be said for an arbitrary procedure of looking at standard gene adjusted p-values and transcript adjusted p-values, where the adjustment was performed independently.\n\nWe found that DRIMSeq was sensitive to detect DTU, but could exceed its false discovery rate (FDR) bounds, particularly on the transcript-level tests, and that a post-hoc, non-specific filtering of the DRIMSeq transcript p-values improved the FDR control. We considered the standard deviation (SD) of the per-sample proportions as a filtering statistic. This statistic does not use the information about which samples belong to which condition group. We set the p-values for transcripts with small per-sample proportion SD to 1 and then re-computed the adjusted p-values using the method of Benjamini and Hochberg31. Excluding transcripts with small SD of the per-sample proportions brought the observed FDR closer to its nominal target in the simulation considered here, as shown below.\n\n\n\nThe above post-hoc filter is not part of the DRIMSeq modeling steps, and to avoid interfering with the modeling, we run it after DRIMSeq. The other three filters used before have been tested by the DRIMSeq package authors, and are therefore a recommended part of an analysis before the modeling begins.\n\nThe DEXSeq package was originally designed for detecting differential exon usage32, but can also be adapted to run on estimated transcript counts, in order to detect DTU. Using DEXSeq on transcript counts was evaluated by Soneson et al.33, showing the benefits in FDR control from filtering lowly expressed transcripts for a transcript-level analysis. 
We benchmarked DEXSeq here, beginning with the DRIMSeq filtered object, as these filters are intuitive, they greatly speed up the analysis, and such filtering was shown to be beneficial in FDR control.\n\nThe two factors of (1) working on isoform counts rather than individual exons and (2) using the DRIMSeq filtering procedure dramatically increase the speed of DEXSeq, compared to running an exon-level analysis. Another advantage is that we benefit from the sophisticated bias models of Salmon, which account for drops in coverage on alternative exons that can otherwise throw off estimates of transcript abundance26. A disadvantage over the exon-level analysis is that we must know in advance all of the possible isoforms that can be generated from a gene locus, all of which are assumed to be contained in the annotation files (FASTA and GTF).\n\nWe first load the DEXSeq package and then build a DEXSeqDataSet from the data contained in the dmDStest object (the class of the DRIMSeq object changes as the results are added). The design formula of the DEXSeqDataSet here uses the language “exon” but this should be read as “transcript” for our analysis. DEXSeq will test – after accounting for total gene expression for this sample and for the proportion of this transcript relative to the others – whether there is a condition-specific difference in the transcript proportion relative to the others. The testing of “this” vs “others” in DEXSeq enables it to be much faster than its original published version, which involved fitting coefficients for each exon within a gene (here it would have been for each transcript within a gene).\n\n\n\nThe following functions run the DEXSeq analysis. While we are only working on a subset of the data, the full analysis for this dataset took less than 3 minutes on a laptop.\n\n\n\nWe then extract the results table, not filtering on mean counts (as we have already conducted filtering via DRIMSeq functions). 
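A sketch of the DEXSeq run and the results extraction, assuming the DEXSeqDataSet built above is named dxd:

```r
library(DEXSeq)

# Standard DEXSeq steps, with "exon" read as "transcript" here:
dxd <- estimateSizeFactors(dxd)
dxd <- estimateDispersions(dxd, quiet = TRUE)
dxd <- testForDEU(dxd, reducedModel = ~sample + exon)

# Extract results without independent filtering, since filtering was
# already performed with the DRIMSeq functions.
dxr <- DEXSeqResults(dxd, independentFiltering = FALSE)
```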
We compute a per-gene adjusted p-value, using the perGeneQValue function, which aggregates evidence from multiple tests within a gene to a single p-value for the gene and then corrects for multiple testing across genes32. Other methods for aggregating evidence from the multiple tests within genes have been discussed in a recent publication and may be substituted at this step34. Finally, we build a simple results table with the per-gene adjusted p-values.\n\n\n\nTo limit the size of the workflow R package, we also reduce the transcript-level results table to a simple data.frame:\n\n\n\nAgain, as we have been working with only a subset of the data, we now load the results tables that would have been generated by running DEXSeq functions on the entire dataset.\n\n\n\nIf the stageR package has not already been loaded, we make sure to load it, and run code very similar to that used above for DRIMSeq two-stage testing, with a target alpha=0.05.\n\n\n\nThe following three functions provide a table with the OFDR control described above. To repeat, the set of genes passing screening should not contain more than 5% of either (1) genes with in fact no DTU or (2) genes containing a transcript with an adjusted p-value less than 0.05 that does not participate in DTU.\n\n\n\n\n\nSUPPA2 is a command-line software package written in Python that also takes as input Salmon quantification, and so, for completeness, we also show example commands and evaluate its performance on the simulated data35. SUPPA2 offers a number of distinct features, including the ability to translate from Salmon transcript-level quantifications to individual splicing events, which are cataloged using a specific vocabulary described in the SUPPA2 software usage guide. 
SUPPA2 additionally offers differential analysis on the splicing events, which may be more valuable to investigators than per-transcript results, depending on the research goals (similar to the exon-level primary use case of DEXSeq).\n\nHere, as our DTU simulation involved switching between expressed transcripts without assessing whether they were separated by one or more splice events, and as the other two Bioconductor methods for detecting DTU involve transcript-level analysis, we ran SUPPA2 in its differential transcript usage mode. We chose to filter on transcripts with TPM larger than 1; TPM filtering is a command-line option available during the diffSplice step of SUPPA2 and this improves the running time. We did not use gene-correction, as we wanted to apply the aggregation and correction method perGeneQValue from DEXSeq to obtain an FDR bounded set of genes and transcripts as output. We did not perform the stage-wise analysis of SUPPA2 output, although this could be done by small modifications to the above code for either DRIMSeq or DEXSeq.\n\nWe used the following R code to prepare two files containing TPM estimates for each of the two groups, using the tximport object defined above:\n\n\n\nThe SUPPA2 example code can be found at the software homepage, but we include here the code used on the 6 vs 6 analysis. The first line generates a set of isoforms from the GTF file. The second and third line generate PSI (percent spliced in) estimates for each transcript from files containing the TPMs for each group. The final line performs the differential analysis.\n\n\n\nWe imported the analysis results into R:\n\n\n\nThe following line was used to compute transcript-level adjusted p-values. We noticed that SUPPA2 had a large gain in sensitivity, while still controlling its FDR, if the set of transcripts examined were limited to those that passed the DRIMSeq filtering steps above. 
Therefore, before running any multiple test correction steps, we filtered to this subset of transcripts. We assessed whether the TPM > 1 filtering step made a difference in the sensitivity and false discovery rate for SUPPA2 when combined with the DRIMSeq filtering; it did not.\n\n\n\nWe generated per-gene adjusted p-values, using perGeneQValue from DEXSeq:\n\n\n\nThis concludes the DTU section of the workflow. If you use DRIMSeq7, DEXSeq32, SUPPA235, stageR8, tximport12, or Salmon13 in published research, please cite the relevant methods publications, which can be found in the References section of this workflow.\n\n\nEvaluation of methods for DTU\n\nWe begin the evaluation by noting that all of the methods correctly avoided calling many of the DGE events as DTU events. The object dge.genes contains the names of all the genes in which all the isoforms were differentially expressed by an equal amount (so not DTU). SUPPA2 output is not included in the workflow, but it only reported one of the DGE genes as DTU out of 851 with an adjusted p-value less than 0.05.\n\nThe number of DGE genes called in DTU analysis with DRIMSeq:\n\n\n\n\n\nThe number of DGE genes called in DTU analysis with DEXSeq:\n\n\n\n\n\nThe iCOBRA package36 was used to construct plots to assess the true positive rate over the false discovery rate at three nominal FDR thresholds: 1%, 5%, and 10%. The code for evaluating all methods and constructing the iCOBRA plots is included in the simulation repository21. Above, we showed an analysis for a comparison of 6 vs 6 samples. 
As we were interested in the performance at various sample sizes, we performed the entire analysis for DRIMSeq, DEXSeq, and SUPPA2 at per-group sample sizes of 3, 6, 9, and 12.\n\nAt the gene level, in terms of controlling the nominal FDR, SUPPA2 always controlled its FDR, even for the smallest sample size, DEXSeq controlled except for the 1% threshold in the smallest sample size case, and DRIMSeq exceeded its FDR but approached the target for larger sample sizes (Figure 3). Exceeding the nominal FDR level by a small amount should be considered with a method’s relative sensitivity in mind as well, compared to other methods. For example, for the 6 vs 6 comparison, DRIMSeq had observed FDR of 12% at nominal 10%, meaning that for every 100 genes reported as containing DTU, the method reported 2 extra genes more than its target. DRIMSeq and DEXSeq were the most sensitive methods in recovering gene-level DTU in this simulation.\n\nTrue positive rate (y-axis) over false discovery rate (FDR) (x-axis) for DEXSeq, DRIMSeq, and SUPPA2. The four panels shown are for per-group sample sizes: (A) 3, (B) 6, (C) 9, and (D) 12. Circles indicate thresholds of 1%, 5%, and 10% nominal FDR, which are filled if the observed value is less than the target (dashed vertical lines).\n\nWe assessed the overall false discovery rate (OFDR) procedure implemented with stageR using gene- and transcript-level p-values from DRIMSeq and DEXSeq. For DRIMSeq, we assessed whether raising the p-values for transcripts with small proportion SD helped to recover OFDR control. DEXSeq input to stageR tended to stay within the 5% OFDR target, and the observed OFDR for DRIMSeq with proportion SD filtering lowered to around 15% at per-group sample size of 6 and higher (Figure 4). Without the filtering, the observed OFDR for DRIMSeq was otherwise around 25%.\n\nEach method is drawn as a line, and the numbers to the right of the points indicate the per-group sample size. 
Adjusted p-values for a nominal 5% OFDR (dashed vertical line) were generated for DEXSeq and DRIMSeq (with and without post-hoc filtering) from gene- and transcript-level p-values using the stageR framework for stage-wise testing.\n\nFinally, we assessed the transcript-level adjusted p-values for DTU directly from DRIMSeq, DEXSeq, and SUPPA2. This analysis did not use stageR for stage-wise testing, and so we compute the standard FDR, where the unit of false discovery is the transcript, in contrast to the OFDR where the unit of false discovery is the gene. In general, we recommend using the stageR results, as it allows error control on a natural procedure of looking across genes, then within genes for which transcripts participate in DTU. SUPPA2 again tended to control its FDR, as did DEXSeq (Figure 5). DRIMSeq with proportion SD filtering approached the target FDR as sample size increased for the 5% and 10% targets, while without filtering, the observed FDR was always higher than the target.\n\nTrue positive rate (y-axis) over false discovery rate (x-axis) for DEXSeq, DRIMSeq (with and without post-hoc filtering), and SUPPA2. The four panels shown are for per-group sample sizes: (A) 3, (B) 6, (C) 9, and (D) 12. Circles indicate thresholds of 1%, 5%, and 10% nominal FDR.\n\nIn Table 1 we include the timing for each method at various sample sizes. Timing includes only the diffSplice step of SUPPA2 (the other steps take less than a minute). For DRIMSeq and DEXSeq, we include the timing of the estimation steps (importing counts with tximport and filtering takes only a few seconds).\n\n\nEvaluation with fixed per-gene dispersion\n\nIn order to further investigate performance differences between DRIMSeq and DEXSeq, we generated an additional simulation in which genes were assigned Negative Binomial dispersion parameters by matching the gene-level count to the joint distribution of mean and dispersions on the GEUVADIS dataset. 
Then transcript-level counts were generated with all transcripts of a gene being assigned the same Negative Binomial dispersion parameter. This contrasts with the main simulation, in which each transcript was assigned its own dispersion parameter, resulting in heterogeneity of dispersion within a gene. As we do not know the degree to which transcripts of a gene would have correlated biological variability in an experimental dataset, we also include results on this additional simulation for the two count-based methods that estimate precision/dispersion, DRIMSeq and DEXSeq.\n\nDRIMSeq, which estimates a single precision parameter per gene, performed slightly better on this simulation at the gene level (Figure 6), although we note that DRIMSeq nearly controlled FDR at the gene level already in the main simulation. DEXSeq models different dispersion parameters for every transcript, and its performance changes less across the two simulations. More improvement was seen for DRIMSeq with proportion SD filtering, in the OFDR analysis (Figure 7) and in the transcript-level analysis without screening (Figure 8). Again, we caveat our comparative evaluation of DRIMSeq and DEXSeq by noting that we do not know whether various real RNA-seq experiments will more closely reflect within-gene heterogeneous dispersion or fixed dispersion, or something in between.\n\nThe four panels shown are for per-group sample sizes: (A) 3, (B) 6, (C) 9, and (D) 12. Circles indicate thresholds of 1%, 5%, and 10% nominal FDR.\n\nThe four panels shown are for per-group sample sizes: (A) 3, (B) 6, (C) 9, and (D) 12. Circles indicate thresholds of 1%, 5%, and 10% nominal FDR.\n\n\nDTU analysis complements DGE analysis\n\nIn the final section of the workflow containing live code examples, we demonstrate how differential transcript usage, summarized to the gene level, can be visualized with respect to differential gene expression analysis results. 
We use tximport and summarize counts to the gene level and compute an average transcript length offset for count-based methods12. We will then show code for using DESeq2 and edgeR to assess differential gene expression. Because we have simulated the genes according to three different categories, we can color the final plot by the true simulated state of the genes. We note that we will pair DEXSeq with DESeq2 results in the following plot, and DRIMSeq with edgeR results. However, this pairing is arbitrary, and any DTU method can reasonably be paired with any DGE method.\n\nThe following line of code is unevaluated, but was used to generate an object txi.g which contains the gene-level counts, abundances and average transcript lengths.\n\n\n\nFor the workflow, we load the txi.g object which is saved in a file salmon_gene_txi.rda. We then load the DESeq2 package and build a DESeqDataSet from txi.g, providing also the sample information and a design formula.\n\n\n\n\n\nThe following two lines of code run the DESeq2 analysis16.\n\n\n\nWe can confirm that most of the DTU genes are correctly not included in the significant DGE results (although some are).\n\n\n\n\n\n\n\n\n\nBecause we happen to know the true status of each of the genes, we can make a scatterplot of the results, coloring the genes by their status (whether DGE, DTE, or DTU by construction).\n\n\n\n\n\n\n\nFigure 9 displays the evidence for differential transcript usage over that for differential gene expression. We can see that the DTU genes cluster on the y-axis (mostly not captured in the DGE analysis), and the DGE genes cluster on the x-axis (mostly not captured in the DTU analysis). The DTE genes fall in the middle, as all of them represent DGE, and some of them additionally represent DTU (if the gene had other expressed transcripts). 
Because DEXSeq outputs an adjusted p-value of 0 for some of the genes, we set these instead to a jittered value around 10^-20, so that their number and location on the x-axis could be visualized. These jittered values should only be used for visualization.\n\n\n\nEach point represents a gene, and plotted are -log10 adjusted p-values for DEXSeq’s test of differential transcript usage (y-axis) and DESeq2’s test of differential gene expression (x-axis). Because we simulated the data, we can color the genes according to their true category.\n\nWe can repeat the same analysis using edgeR as the inference engine3. The following code incorporates the average transcript length matrix as an offset for an edgeR analysis.\n\n\n\nThe basic edgeR model fitting and results extraction can be accomplished with the following lines:\n\n\n\nWe confirm that most of the DTU genes are correctly not reported as DGE:\n\n\n\n\n\nAgain, we can color the genes by their true status in the simulation:\n\n\n\nFigure 10 displays the evidence for differential transcript usage over that for differential gene expression, now using DRIMSeq and edgeR. One obvious contrast with Figure 9 is that DRIMSeq outputs lower non-zero adjusted p-values than DEXSeq does, where DEXSeq instead outputs 0 for many genes. The plots look more similar when zooming in on the DRIMSeq y-axis, as can be seen in Figure 11.\n\n\n\n\n\n\nEvaluation of methods for DGE\n\nWe additionally assessed Bioconductor and other R packages for differential gene expression, to determine true positive rate and control of false discovery rate on the simulated dataset. 
In this analysis, the simulated “DTE” genes (where a single transcript was chosen to be differentially expressed) should count as differential gene expression, while the simulated “DTU” genes should not, as the total expression of the gene remains constant.\n\nWe compared DESeq216, EBSeq37, edgeR3, edgeR-QL (using the quasi-likelihood functions)38, limma with voom transformation6, SAMseq39, and sleuth40. We used tximport to summarize Salmon abundances to the gene level, and provided all methods other than DESeq2 and sleuth with the lengthScaledTPM count matrix. sleuth takes as input the quantification from kallisto15, which was run with 30 bootstrap samples and bias correction. For gene-level analysis in sleuth, the argument aggregation_column=\"gene_id\" was used. As DESeq2 has specially designed import functions for taking in estimated gene counts and an offset from tximport, we used this approach to provide Salmon-summarized gene-level counts and an offset. edgeR and edgeR-QL had the same performance using the counts and offset approach or the lengthScaledTPM approach, so we used the latter for code simplicity. The exact code used to run the different methods can be found at the simulation code repository21. Timings for the different gene-level methods are presented in Table 2.\n\nTiming includes data import and summarization to gene-level quantities using one core.\n\niCOBRA plots with true positive rate over false discovery rate for gene-level analysis across four different per-group sample sizes are presented in Figure 12. For the smallest per-group sample size of 3, all methods except DESeq2 and EBSeq tended to control the FDR, while those two methods had, for example, 15% FDR at the nominal 10% rate. SAMseq, with so few samples, did not have any sensitivity to detect DGE. At the per-group sample size of 6, all methods except DESeq2 and SAMseq tended to control the FDR. At this sample size, EBSeq controlled its FDR. 
For the largest per-group sample sizes, 9 and 12, the performance of many methods remained similar to before, except that sleuth did not control the nominal 5% or 10% FDR. We performed additional experiments to see if the performance of sleuth at higher sample sizes was related to the realistic GC bias parameters used in the simulation, but simulating fragments uniformly from the transcripts revealed the same performance at per-group sample sizes 9 and 12 (Supplementary Figure 2). Reducing the number of DGE, DTE and DTU genes from 10% to 5% each, however, did recover control of the FDR at the nominal 5% and 10% FDR for sleuth (Supplementary Figure 3).\n\n\nEvaluation of methods for DTE\n\nFinally, we assessed the Bioconductor and R packages for differential transcript expression analysis. While we believe the separation of differential transcript usage and differential gene expression described in the earlier sections of the workflow represents an easily interpretable approach, some investigators may prefer to assess differential expression on a per-transcript basis. For this assessment, all of the simulated non-null transcripts count as DTE, whether from the simulated DGE-, DTE-, or DTU-by-construction genes. For most of the methods, we simply provided the transcript-level data to the same functions as for the DGE analysis. EBSeq was provided with the number of isoforms per gene. The timing of the methods is presented in Table 3.\n\nTiming includes data import.\n\niCOBRA plots with the true positive rate over false discovery rate for the transcript-level analysis are shown in Figure 13. The performance at per-group sample size of 3 was similar to the gene-level analysis, except DESeq2 came closer to controlling the FDR and EBSeq performed slightly worse than before, while the rest of the methods tended to control their FDR. 
At per-group sample size of 6, all of the evaluated methods tended to control the FDR, though DESeq2, EBSeq, SAMseq, and sleuth tended to have higher sensitivity than edgeR, edgeR-QL, and limma. The same issue of FDR control for sleuth was seen in the transcript-level analysis as in the gene-level analysis, for per-group sample sizes 9 and 12.\n\n\nDiscussion\n\nHere we presented a workflow for analyzing RNA-seq experiments for differential transcript usage across groups of samples. The Bioconductor packages used, DRIMSeq, DEXSeq, and stageR, are simple to use and fast when run on transcript-level data. We showed how these can be used downstream of transcript abundance quantification with Salmon. We evaluated these methods on a simulated dataset and showed how the transcript usage results complement a gene-level analysis, which can also be run on output from Salmon, using the tximport package to aggregate quantification to the gene level. We used the simulated dataset to evaluate Bioconductor and other R packages for differential gene expression and differential transcript expression. We recommend the use of stageR for its formal statistical procedure involving a screening and confirmation stage, as this fits closely with what we expect a typical analysis to entail. stageR then provides error control for an overall false discovery rate, assuming that the underlying tests are well calibrated.\n\nOne potential limitation of this workflow is that, in contrast to other methods such as the standard DEXSeq analysis, SUPPA2, or LeafCutter41, here we considered and detected expression switching only between annotated transcripts. Methods such as DEXSeq (exon-based), SUPPA2, or LeafCutter may benefit in terms of power and interpretability from performing statistical analysis directly on exon usage or splice events, and DEXSeq (exon-based) and LeafCutter additionally benefit from the ability to detect un-annotated events.
The workflow presented here would require further processing to attribute transcript usage changes to specific splice events, and is limited to considering the estimated abundance of annotated transcripts.\n\n\nSession information\n\nThe following provides the session information used when compiling this document.\n\n\n\n\n\n\nSoftware versions\n\nThe statistical methods were evaluated using the following software versions: DRIMSeq - 1.8.0, DEXSeq - 1.26.0, stageR - 1.2.21, tximport - 1.8.0, DESeq2 - 1.20.0, EBSeq - 1.20.0, edgeR - 3.22.2, limma - 3.36.1, samr - 2.0, sleuth - 0.29.0, SUPPA2 - 2.3. The samples were quantified with Salmon version 0.10.0 and kallisto version 0.44.0. polyester version 1.16.0 and alpine version 1.6.0 were used in generating the simulated dataset.\n\n\nData availability\n\nThe simulated paired-end read FASTQ files have been uploaded in three batches of eight samples each to Zenodo -\n\nhttps://doi.org/10.5281/zenodo.129137522\n\nhttps://doi.org/10.5281/zenodo.129140423\n\nhttps://doi.org/10.5281/zenodo.129144324\n\nThe quantification files are also available as a separate Zenodo dataset - https://doi.org/10.5281/zenodo.129152225\n\nThe scripts used to generate the simulated dataset are available at the simulation GitHub repository (https://github.com/mikelove/swimdown/tree/v1.0) and archived here - https://doi.org/10.5281/zenodo.129389921. All data is available under a CC BY 4.0 license.\n\n\nSoftware availability\n\n1. All software used in this workflow is available as part of Bioconductor version 3.7.\n\n2. Source code for the workflow: https://github.com/mikelove/rnaseqDTU\n\n3. Link to archived source code as at time of publication: https://doi.org/10.5281/zenodo.129391442\n\n4. License: Artistic-2.0",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work of MIL on this workflow was supported by the National Human Genome Research Institute [R01 HG009125], the National Cancer Institute [P01 CA142538], and the National Institute of Environmental Health Sciences [P30 ES010126]. CS declared that no grants were involved in supporting this work. The work of RP on this workflow was supported by the National Science Foundation [BIO-1564917 and CCF-1750472].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors thank Koen Van den Berge and Malgorzata Nowicka for helpful comments on the workflow.\n\n\nSupplementary material\n\nSupplementary File 1 - PDF file containing the following supplementary figures -\n\nClick here to access the data.\n\nSupplementary Figure 1: Dispersion-over-mean comparison plot produced by countsimQC. The left panel shows DESeq2 estimates of dispersion per gene over the mean of normalized counts from the GEUVADIS project, provided by the Recount2 project (n = 458 non-duplicated samples). The right panel shows estimates of dispersion per transcript over the mean of normalized counts for Salmon estimated transcript counts for the simulated dataset (the 12 vs 12 comparison), showing only the transcripts where the mean of counts over samples was greater than 5. Black points indicate maximum likelihood estimates (Cox-Reid adjusted), blue points indicate posterior estimates, and the red line indicates the parametric trend line. Points at the bottom of the plot indicate maximum likelihood estimates of 10-8. The design formula included sequencing center and population for GEUVADIS, and the condition variable for the simulated dataset. The simulation dataset was constructed by drawing mean and dispersions parameters from the joint distribution of the estimates from the GEUVADIS project. 
The full countsimQC report can be found at https://github.com/mikelove/swimdown/tree/master/countsimqc.\n\nSupplementary Figure 2: We performed additional experiments to assess the false discovery rate (FDR) control for sleuth at per-group sample sizes of 9 (left column) and 12 (right column), at the gene level (top row) and the transcript level (bottom row). To determine whether the excess observed FDR was due to the inclusion of realistic fragment GC coverage in the main simulation, for this experiment fragments were instead drawn uniformly from positions on the transcripts. The dispersion-mean relationship was kept the same, drawing from the joint distribution of estimates on the GEUVADIS dataset (n = 458).\n\nSupplementary Figure 3: As in Supplementary Figure 2, shown is the result of an additional experiment to assess the false discovery rate (FDR) control for sleuth for the two largest sample sizes in the simulation. For this experiment, realistic fragment GC bias was used in the simulation, but the percent of genes with DGE, DTE and DTU was lowered from 10% to 5% each. This modification of the simulation helped to regain control of FDR for sleuth.\n\n\nReferences\n\nGlaus P, Honkela A, Rattray M: Identifying differentially expressed transcripts from RNA-seq data with biological variation. Bioinformatics. 2012; 28(13): 1721–1728.\n\nAnders S, Reyes A, Huber W: Detecting differential usage of exons from RNA-seq data. Genome Res. 2012; 22(10): 2008–2017.\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140.\n\nMcCarthy DJ, Chen Y, Smyth GK: Differential expression analysis of multifactor RNA-seq experiments with respect to biological variation. Nucleic Acids Res.
2012; 40(10): 4288–4297.\n\nSmyth GK: Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004; 3(1): Article3.\n\nLaw CW, Chen Y, Shi W, et al.: Voom: Precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014; 15(2): R29.\n\nNowicka M, Robinson MD: DRIMSeq: a Dirichlet-multinomial framework for multivariate count outcomes in genomics [version 2; referees: 2 approved]. F1000Res. 2016; 5: 1356.\n\nVan den Berge K, Soneson C, Robinson MD, et al.: stageR: a general stage-wise method for controlling the gene-level false discovery rate in differential expression and differential transcript usage. Genome Biol. 2017; 18(1): 151.\n\nGoldstein LD, Cao Y, Pau G, et al.: Prediction and Quantification of Splice Events from RNA-Seq Data. PLoS One. 2016; 11(5): e0156132.\n\nVitting-Seerup K, Sandelin A: The landscape of isoform switches in human cancers. Mol Cancer Res. 2017; 15(9): 1206–1220.\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121.\n\nSoneson C, Love MI, Robinson MD: Differential analyses for RNA-seq: transcript-level estimates improve gene-level inferences [version 2; referees: 2 approved]. F1000Res. 2016; 4: 1521.\n\nPatro R, Duggal G, Love MI, et al.: Salmon provides fast and bias-aware quantification of transcript expression. Nat Methods. 2017; 14(4): 417–419.
Patro R, Mount SM, Kingsford C: Sailfish enables alignment-free isoform quantification from RNA-seq reads using lightweight algorithms. Nat Biotechnol. 2014; 32(5): 462–464.\n\nBray NL, Pimentel H, Melsted P, et al.: Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016; 34(5): 525–527.\n\nLove MI, Huber W, Anders S: Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15(12): 550.\n\nTrapnell C, Hendrickson DG, Sauvageau M, et al.: Differential analysis of gene regulation at transcript resolution with RNA-seq. Nat Biotechnol. 2013; 31(1): 46–53.\n\nHeller R, Manduchi E, Grant GR, et al.: A flexible two-stage procedure for identifying gene sets that are differentially expressed. Bioinformatics. 2009; 25(8): 1019–1025.\n\nLappalainen T, Sammeth M, Friedländer MR, et al.: Transcriptome and genome sequencing uncovers functional variation in humans. Nature. 2013; 501(7468): 506–511.\n\nCollado-Torres L, Nellore A, Kammers K, et al.: Reproducible RNA-seq analysis using recount2. Nat Biotechnol. 2017; 35(4): 319–321.\n\nLove MI: Scripts used in constructing and evaluating the simulated data for Swimming Downstream. 2018.\n\nLove MI: Simulation data (1) for Swimming Downstream: pairs of samples 1-4. 2018.\n\nLove MI: Simulation data (2) for Swimming Downstream: pairs of samples 5-8. 2018.\n\nLove MI: Simulation data (3) for Swimming Downstream: pairs of samples 9-12. 2018.\n\nLove MI: Quantification files for Swimming Downstream. 2018.
Love MI, Hogenesch JB, Irizarry RA: Modeling of RNA-seq fragment sequence bias reduces systematic errors in transcript abundance estimation. Nat Biotechnol. 2016; 34(12): 1287–1291.\n\nFrazee AC, Jaffe AE, Langmead B, et al.: Polyester: simulating RNA-seq datasets with differential transcript expression. Bioinformatics. 2015; 31(17): 2778–2784.\n\nSoneson C, Robinson MD: Towards unified quality verification of synthetic count data with countsimQC. Bioinformatics. 2018; 34(4): 691–692.\n\nKöster J, Rahmann S: Snakemake - a scalable bioinformatics workflow engine. Bioinformatics. 2012; 28(19): 2520–2522.\n\nDi Tommaso P, Chatzou M, Floden EW, et al.: Nextflow enables reproducible computational workflows. Nat Biotechnol. 2017; 35(4): 316–319.\n\nBenjamini Y, Hochberg Y: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Series B Stat Methodol. 1995; 57(1): 289–300.\n\nAnders S, Reyes A, Huber W: Detecting differential usage of exons from RNA-seq data. Genome Res. 2012; 22(10): 2008–2017.\n\nSoneson C, Matthes KL, Nowicka M, et al.: Isoform prefiltering improves performance of count-based methods for analysis of differential transcript usage. Genome Biol. 2016; 17(1): 12.\n\nYi L, Pimentel H, Bray NL, et al.: Gene-level differential analysis at transcript-level resolution. Genome Biol. 2018; 19(1): 53.\n\nTrincado JL, Entizne JC, Hysenaj G, et al.: SUPPA2: fast, accurate, and uncertainty-aware differential splicing analysis across multiple conditions. Genome Biol. 2018; 19(1): 40.
Soneson C, Robinson MD: iCOBRA: open, reproducible, standardized and live method benchmarking. Nat Methods. 2016; 13(4): 283.\n\nLeng N, Dawson JA, Thomson JA, et al.: EBSeq: an empirical Bayes hierarchical model for inference in RNA-seq experiments. Bioinformatics. 2013; 29(8): 1035–1043.\n\nLund SP, Nettleton D, McCarthy DJ, et al.: Detecting differential expression in RNA-sequence data using quasi-likelihood with shrunken dispersion estimates. Stat Appl Genet Mol Biol. 2012; 11(5).\n\nLi J, Tibshirani R: Finding consistent patterns: A nonparametric approach for identifying differential expression in RNA-seq data. Stat Methods Med Res. 2013; 22(5): 519–536.\n\nPimentel H, Bray NL, Puente S, et al.: Differential analysis of RNA-seq incorporating quantification uncertainty. Nat Methods. 2017; 14(7): 687–690.\n\nLi YI, Knowles DA, Humphrey J, et al.: Annotation-free quantification of RNA splicing using LeafCutter. Nat Genet. 2018; 50(1): 151–158.\n\nLove MI, Soneson C, Patro R: Swimming downstream: statistical analysis of differential transcript usage following Salmon quantification. 2018."
}
|
[
{
"id": "35548",
"date": "24 Jul 2018",
"name": "Kristoffer Vitting-Seerup",
"expertise": [
"Reviewer Expertise Bioinformatics with a focus on isoform usage analysis."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSummary In “Swimming downstream: statistical analysis of differential transcript usage following Salmon quantification” Love et al presents a combined workflow and benchmark for differential transcript usage. This is a vital paper as there is no consensus on which differential transcript usage tools works better (here addressed by the benchmark part) and very few people analyze differential transcript usage – something the workflow can hopefully help with. Of special note is the extent to which open source have been embraced by Love et al – an approach that is commendable (and copy worthy). Although the manuscript has a lot of potential it can, in its current form, be challenging to read and the benchmark of differential transcript usage part needs to be extended. Revisions are therefore required.\nPreface\nMalte Thodeberg helped me review this paper – thanks Malte! Since neither of us are native English speakers/writers we have not attempted to corrected for potential gramma and/or spelling mistakes I'm the developer of IsoformSwitchAnalyzeR.\n\nGeneral comments\nThe article switches between describing a workflow, which users can follow to perform differential transcript usage on their own data, and a benchmark of differential expression/usage tools. The two sections should be much more clearly separated and each should be more concisely written.\nOne solution would be to have the benchmark first and the workflow afterwards. 
It would then be natural that the workflow used the tool(s) deemed better by the benchmark.\n\nThe main problem with the workflow part of the manuscript is the intermixing of the workflow and benchmarking (and the intro/methods) sections, which makes it necessary to include a lot of callouts, omissions and special cases. This has the unintended effect of cluttering the workflow, making it hard to read and/or follow. This would, however, be solved by the above-suggested re-structuring. If such a restructure were implemented, it would also seem more natural that the workflow consistently only use a small dataset (either a subset of the simulated data or another dataset entirely), whereby the workflow could be simplified a lot. Although the benchmark is of high quality, it still needs to be a bit more exhaustive. Even with the suggested re-structure, the whole article would highly benefit from an overview paragraph and/or figure to give the reader the high-level overview of the outline before jumping into it (something like a table/figure/description of content). This could also be a table of contents (with links included to enable easy jumping in the article).\n\nTitle\nThe title should reflect that it is a workflow and/or benchmark. The current title suggests the authors developed a new tool for differential transcript usage which was specifically designed to integrate with Salmon. Furthermore, it could be considered to change the title so it also indicates the differential gene/transcript expression analysis performed in the manuscript.\n\nIntroduction\nThe introduction lacks a section describing why differential transcript usage is of interest in the first place. Large parts of what would normally be in the introduction and methods have been moved into the results. Introduction to tools and methods, including descriptions of how they work, belongs in the introduction. Description of parameter choice for e.g.
scaling during tximport also belongs in intro/methods.\nOptional suggestion: include a layman's introduction to how the tools work (the technical parts are in the original papers for those interested).\n\nIn the section where tools for DTU are mentioned, please remove (or argue for the inclusion of) BITSeq and stageR. StageR is for post-analysis of p-values (no test). Although BITSeq is mentioned in some of the BiocViews of alternative splicing, neither the article nor the vignette shows anything but DTE (aka no DTU). Mention that SGSeq wraps DEXSeq. The test built into IsoformSwitchAnalyzeR is not rank-based – but it is obsolete and will be removed from the next update – so it could be skipped entirely (along with the other non-maintained tests). Please reference IsoformSwitchAnalyzeR for its main purpose: the downstream analysis of functional consequences of identified isoform switches. Consider also mentioning other tools for downstream analysis (some can be found at https://www.bioconductor.org/packages/devel/BiocViews.html#___AlternativeSplicing ). To be more user-friendly, please insert a link when mentioning the IsoformSwitchAnalyzeR vignette.\n\nMethods\nPlease add in the number of transcripts considered expressed (>= 10 estimated fragment counts). The simulations performed should either be named or numbered to allow for clear reference to which of the simulated datasets are used. In the countsimQC report, please compare the simulated data to the 12 samples which were used as the basis of the simulation (comparing 12 to hundreds of samples is not easy to interpret). Please elaborate on the discussion of the different options for scaling-from-TPM-to-counts. It is unclear what the difference is and when it matters.
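To make the counts-from-abundance distinction concrete, here is a deliberately simplified sketch of the idea behind tximport's scaledTPM and lengthScaledTPM options (an illustration of the concept only, not tximport's actual implementation; the toy numbers are hypothetical):

```python
# Simplified single-sample sketch: tpm[i] is the TPM of transcript i,
# mean_len[i] the transcript's average (effective) length across samples,
# lib_size the sample's total fragment count.

def scaled_tpm(tpm, lib_size):
    """TPMs rescaled so they sum to the library size."""
    total = sum(tpm)
    return [x / total * lib_size for x in tpm]

def length_scaled_tpm(tpm, mean_len, lib_size):
    """TPMs weighted by transcript length, then rescaled to the library size."""
    # Weighting by length first means longer transcripts get more counts, so a
    # usage switch between isoforms of different length can change the summed
    # gene count even at constant total gene expression.
    weighted = [x * l for x, l in zip(tpm, mean_len)]
    total = sum(weighted)
    return [w / total * lib_size for w in weighted]

tpm = [10.0, 10.0]          # two isoforms, equal abundance
mean_len = [500.0, 2000.0]  # but very different lengths
print(scaled_tpm(tpm, 1000))                   # counts split evenly: [500.0, 500.0]
print(length_scaled_tpm(tpm, mean_len, 1000))  # counts favor the long isoform: [200.0, 800.0]
```

This length weighting is exactly why the article cautions that a pure isoform switch can alter the lengthScaledTPM gene total even when total gene expression is unchanged.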
Furthermore, you write “if we used lengthScaledTPM transcript counts, then a change in transcript usage among transcripts of different length could result in a changed total count for the gene, even if there is no change in total gene expression” – is there a mixup here? If not, why do you then use lengthScaledTPM in the DGE/DTU section? Please include a recommendation of when to use which option for analysis of DGE/DTE, DTU, and if both are present in the data.\n\nModifications\nInclude a paragraph on quantification before introducing the modifications. If any expression filtering was done (as Fig. 1 indicates and as mentioned above), it should be clearly stated. Currently it is unclear how many genes were modified in which way. To remedy that, please provide a table indicating the number of genes modified for DTU or DGE by each of the changes you introduce (as well as the total number of genes modified). Why both simulate DTU with a modification of a single isoform and a switch of two isoforms if you are not investigating whether it makes a difference - seems redundant? (more on that in the DGE benchmark).\n\nIn the workflow\nPlease add a comment on why DRIMSeq has NA p-values (that will confuse many people).\n\nPost-hoc filtering on DRIMSeq\nWhat is the reasoning behind this filtering step? And is it statistically valid to do this filtering – the proportions and p-values are not independent. Is the modified p-value distribution still uniform in the interval [0.05-1[, enabling proper FDR correction? If the filtering is statistically sound, why not also do it for the other methods?\n\nEvaluation of methods for DTU\nThis is the major selling point of the article and the part that requires the most work.\nTo reflect a very common use-case scenario, the benchmark should also be performed with 2 replicates. Since the benchmark presented here shows quite subtle differences (in TPR vs FDR) between 9 and 12 replicates, the 2-replicate scenario could replace either of them.
The benchmark simulation should not be performed only once, as the exact samples used in that run will have a large effect (especially for the smaller comparisons). Instead, 25 simulations should be performed and the average iCOBRA plot could be shown (possibly extended to also show variation across the simulations). The benchmark must also include a run on unmodified simulated data to test how many false positives are found if there truly is no DTU (which might be the case for some datasets). Be consistent and concise in the use of stageR. Either use it with no tools or with all tools (or both, to also enable a benchmark of stageR); otherwise the transcript-level FDRs between tools are not comparable. Highlight the difference between perGeneQValue and stageR (or only use one of them) or highlight where each is used. For example, it is not clear whether stageR was used in Figure 3 and, if it was, whether it was used for all tools. Given the success of repurposing DEXSeq for DTU, and the good performance of limma for DTE/DGE, the current benchmark could also test a repurposing of limma’s (and edgeR’s) differential exon usage test. This is optional – but it would be a huge step forward for testing differential isoform usage as it would bring a lot of clarity to the field. Use the same axes for the 4 iCOBRA plots to illustrate the improvement with increasing number of samples. Please include group sizes (e.g. 3 vs 3, 6 vs 6 etc.) in the figure to make it easier to read - this could be instead of the rather uninformative “overall” facet title. Please comment:\nOn the large performance increase from “Kallisto + DEXSeq” in Soneson et al, Genome Biology 2016 (where FDR performance was quite poor) to the current “Salmon + DEXSeq”, which performs rather well.
On the differences between your benchmark (indicating DEXSeq works better) and the benchmark performed by Nowicka et al in the DRIMSeq paper (indicating DRIMSeq works better).\n\nPlease move the evaluation with fixed per-gene dispersion to the supplementary material, as it is just a sanity check. Please end the section with a recommendation of what tool to use.\n\nEvaluation of DTU vs DGE\nThis section belongs in the workflow part of the article.\n\nEvaluation of DGE/DTE\nThe reason for (re)doing a DGE/DTE benchmark here needs to be clearly described (which is to test how tools perform when there is also underlying DTU, as hinted in Soneson 2016, F1000Research). To reflect a very common use-case scenario, the benchmark should also be performed with 2 replicates. The 2-replicate scenario could replace either the 9 or 12 replicates. The table with runtimes should be moved to the supplementary material, as it can be summarized as “sleuth is slower”. The TPR vs FDR figures are unreadable due to too many lines on top of one another – this must be fixed. Furthermore, use the same axes for the 4 iCOBRA plots to show the improvement with increasing number of samples. Please include group sizes in the figure to make it easier to read - this could be instead of the “overall” facet title. The DGE results are quite surprising – in other recent benchmarks most tools handle FDR quite well – which is not the case here.\nI suspect this might be due to the DGE where only a single isoform was changed (meaning the overall gene expression could change only marginally). Therefore, the authors should investigate how the benchmark results differ when only considering either the DGE introduced with one isoform upregulated or the DGE where all isoforms were upregulated. If the results hold up, a comment on how this compares to recent DGE benchmarks is necessary.\n\nIf the problem instead seems to be the presence of DTU, this should be highlighted and discussed.
For Figure S2, please include the sleuth result on the main simulated data as well; otherwise a direct comparison (to judge the effect of the GC content) is not feasible. Please end the section with a recommendation of what tools to use.\n\nDiscussion\nThere also needs to be a discussion around the benchmark part of the paper – it is currently completely missing.\n\nPlease don't hesitate to contact me if anything was unclear.\n\nIs the rationale for developing the new method (or application) clearly explained? Partly\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3965",
"date": "14 Sep 2018",
"name": "Michael Love",
"role": "Author Response",
"response": "We thank all reviewers for their insightful comments and suggestions that we feel have greatly improved the readability and usefulness of the workflow. We summarize the main changes and then address reviewer-specific comments point-by-point: We have addressed all minor text or grammatical suggestions by the reviewers. We have re-organized the article into distinct and more separated Workflow and Evaluation sections, which was suggested by all reviewers. We begin the article with a clear outline, titled: \"Structure of this article\", which outlines the Workflow part and the Evaluation part. This outline has direct links to relevant sections and subsections which follow. We have also included an overview diagram of the methods and packages included in the Workflow section, and how they are interconnected. We have added to the Introduction more motivational text on why a DTU analysis is relevant for biology and biomedical research. We have added a large section describing the methods DEXSeq and DRIMSeq, before the Workflow section. We have expanded the original sections discussing counts-from-abundance and their use in the workflow, to make our use of the tximport method more clear. For the DEXSeq section, we have corrected an earlier incorrect use of nbinomLRT(), which is now replaced with the correct testForDEU(). The practical result is that DEXSeq performs somewhat less conservatively, but the original code was incorrect, and the fix is necessary. The incorrect use of nbinomLRT() in this context will now produce an error in future releases of Bioconductor, to avoid possible incorrect usage. We have added RATs to the DTU Evaluation. We now apply stageR to all DTU methods that are evaluated: DRIMSeq, DEXSeq, RATs, and SUPPA2. The RATs and SUPPA2 methods are described, but the code is not provided, as these packages are not part of the Workflow. 
We use consistent x-axes and y-axes whenever possible, and use PDF instead of JPG to reduce compression artifacts. When a consistent x-axis is not used in the main text, we include Supplementary Figures with the same plots with outlying methods dropped to keep the x-axis consistent. We use a palette in which colors are more discernible for color-blind readers. In the Evaluation sections, we include additional plots which examine the simulated gene type source of false positives for the DTU, DGE, and DTE analyses. We added a new evaluation to examine performance differences between DRIMSeq and DEXSeq, using the identical simulated data that was used in Soneson et al (2016) and Nowicka and Robinson (2016). We have added a 2 vs 2 simulation for the DTU Evaluation. We added a brief overview description of all methods assessed in the DGE and DTE Evaluations. We have added more recommendations in the Discussion.\n\nReviewer-specific comments:\n\nGeneral comments\nWe believe we have made the separation between Workflow and Evaluation much clearer now, and have added an outline to the beginning of the article with hyperlinks to subsections and with an overview diagram, as usefully suggested here.\n\nTitle\nWe believe the title is appropriate and does not suggest a new tool. The fact that existing tools are leveraged in the workflow is clear from the abstract and the main text.\n\nIntroduction\nThe Bioconductor workflows do not have the typical structure with Introduction, Methods, Results and Discussion, but instead a prolonged section where relevant concepts are typically introduced as needed. See, for example, the DESeq2 workflow: https://bioconductor.org/packages/rnaseqGene. We have now added overview descriptions of the methods DEXSeq and DRIMSeq before the Workflow section begins. We have removed BitSeq.
We believed earlier that cjBitSeq, which is a new DTU method, was implemented in the Bioconductor package BitSeq, but it is a separate GitHub package (https://github.com/mqbssppe/cjBitSeq). Since we are listing Bioconductor packages that can be used for DTU, we now do not list BitSeq. We now have a separate sentence describing stageR and its connection to the DTU methods, and SGSeq (and we mention its leveraging of DEXSeq or limma). We no longer mention the statistical test from Vitting-Seerup and Sandelin (2017). We use the suggested purpose description for IsoformSwitchAnalyzeR, link to the AlternativeSplicing BiocViews, and include a link to the IsoformSwitchAnalyzeR vignette.\n\nMethods\nWe now include the number of transcripts with estimated counts greater than 10 in the Simulation. We name the various simulations, and use their names when referring to them in the main text or captions. Our purpose in using the countsimQC report is to compare the joint distribution of estimated parameters (mean, dispersion) from the simulation and from the dataset from which the estimates were derived. We therefore compare the 24 simulated samples to the 458 non-duplicated GEUVADIS samples that were used for the estimation of the mean and dispersion parameters. We have made this clearer in the caption of the countsimQC Supplementary Figure. We have elaborated on the discussion of the different options for counts-from-abundance, including the sentence about change in total counts. We include details on the recommended counts-from-abundance options throughout the text and in the overview diagram, Figure 1. We state whenever any expression filtering was done. The only expression filtering in the DTU section is performed by the filtering functions in DRIMSeq, and the TPM > 1 filter to speed up SUPPA2 on the command line. We mention the various expression filters used by the different DGE and DTE methods in the Evaluation section for those methods.
We include in the Simulation section the exact number of genes modified by simulated DGE, simulated DTE, and simulated DTU. We have added a comment on the NA p-values for DRIMSeq in the section in the workflow where they are replaced with a p-value of 1. The text now reads: \"From investigating these NA p-value cases for DRIMSeq, they all occur when one condition group has all zero counts for a transcript, but sufficient counts from the other condition group, and sufficient counts for the gene. DRIMSeq will not estimate a precision for such a gene. These all happen to be true positive genes for DTU in the simulation, where the isoform switch is total or nearly total. DEXSeq, shown in a later section, does not produce NA p-values for any genes. A potential fix would be to use a plug-in common or trended precision for such genes, but this is not implemented in the current version of DRIMSeq.\" We now perform post-hoc proportion SD filtering on the adjusted transcript p-values for DRIMSeq directly, which has little effect on the results. The SD of proportions and the p-values may possibly be independent under the null hypothesis of no DTU, which is the requirement for proper Type I error control of an independent filter [Bourgon (2010)], but we do not attempt to provide empirical evidence to support this. Importantly, we apply the post-hoc filtering because we have empirical evidence that DRIMSeq was not providing uniform p-values for null transcripts on the simulated data explored in this article. Therefore, we begin with a non-uniform distribution of p-values for the null transcripts. The filtering is shown empirically to improve the FDR control. We do not perform the simulation multiple times, and we have not extended iCOBRA to support multiple iterations on a single plot, which is beyond the scope of this article. 
We are most interested in the relative performance of the various methods, and their general location on the TPR-FDR plots, which is achieved with the current evaluation. We did explore running DEXSeq 25 times on the 3 vs 3 \"main\" simulation, and the inter-simulation variation in the TPR-FDR plot was minimal. We have uploaded all 24 of the simulated paired-end reads to Zenodo, and the dataset is already quite large. We do not run the methods on entirely null datasets, which is beyond the scope of this article. We have now used stageR on all methods. stageR accepts gene-level p-values (or adjusted p-values) and transcript-level p-values. If gene-level p-values are not provided by a method then DEXSeq's perGeneQValue was used to generate gene-level adjusted p-values, for use with stageR. We do not evaluate other methods for exon usage, as we focus in the workflow on Bioconductor methods that have already been proposed and evaluated for DTU analysis in publications. We now use consistent axes, and include the group size in the strip titles. We now evaluate DRIMSeq and DEXSeq on the identical simulation dataset used in both Soneson et al (2016) and Nowicka and Robinson (2016). We find similar performance of DEXSeq as reported in those papers using a less stringent transcript filter, but when we use DRIMSeq count and proportion filters as recommended in this workflow, the performance of DEXSeq is greatly improved, to levels consistent with what we see in the \"main\" simulation. Evaluation of DGE/DTE We clarify why a DGE and DTE evaluation is included. We do not perform a 2 replicate DGE or DTE evaluation, as this is beyond the scope of the article. We now break down the DGE and DTE results by simulated gene type. We do not see any strong enrichment of one simulated gene type in the false positive breakdown plots. We believe our evaluation may differ from others in exploring the consistency of results as sample size increases. 
Discussion We now include in the Discussion some recommendations on tool usage and performance."
}
]
},
{
"id": "35546",
"date": "30 Jul 2018",
"name": "Alicia Oshlack",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nA workflow to enable more people to perform differential transcript usage on their RNA-seq data set is a useful addition to the literature. Benchmarking methods and combinations of workflows are also an important part of the literature. In this manuscript, both things have been attempted, which unfortunately makes the manuscript a little blurred in its focus.\nWe view a workflow as an instructional manuscript in which a step-by-step analysis can be reproduced with a new data set that a user wants to bring to the analysis. This is presented in the sections Quantification and data import and Statistical analysis of differential transcript usage and, in our view, should be the focus of the manuscript. These are complex analyses combining several packages with several alternative paths. It would really help the user if a flowchart for this analysis could be made that shows the common parts of the workflow (e.g. starting with Salmon, importing into R), how the alternatives split and which packages are used for alternative parts of the workflow. For example, DRIMseq is an alternative to DEXseq, which can then be followed by stageR, and Suppa is a complete (parallel) workflow.\nThe evaluation sections are somewhat useful and interesting in their own right, but rely on simulated data and are therefore not directly applicable to readers who are looking for workflows to guide them in their own data analysis. 
However, they do help users decide which workflows to choose in their own analysis.\nOverall we wonder if this manuscript could be two separate manuscripts: a workflow for DTU and an evaluation of methods based on simulated data? Another (preferable) alternative would be to only focus on DTU in the evaluation and keep the section Evaluation of methods for DTU as a guide to help the user to choose the workflow (with this clearly stated). We felt there were too many additional analyses introduced after this point which relied on more in-depth understanding of the DGE literature, which was not really the focus of the workflow.\nMinor comments: Several sections should be edited for clarity and flow of ideas. Specifically,\npage 6: \"We recommend scaledTPM for differential transcript usage so that the estimated proportions fit by DRIMSeq in the following sections correspond to the proportions of underlying abundance.\" Could the authors please rewrite/break up this sentence to improve readability? page 6, section 'Import counts into R/Bioconductor': the authors should clarify whether the referenced R package is for demonstration purposes only (i.e. should the user install the rnaseqDTU to perform any of the workflow?). page 6: could the concept of using counts from abundance be introduced/explained before referring to specific package parameters and settings? page 6: \"The following code chunk is not evaluated, but instead we will load a pre-constructed matrix of counts\". Could the authors please clarify this sentence? We assume this means that instead of constructing a matrix of counts (as in a typical workflow), pre-constructed data is loaded. page 7 \"We ran the following unevaluated code chunks\": does 'unevaluated' refer to not run in a typical workflow? page 7, 'Statistical analysis of differential transcript usage', second paragraph: could the description of txdf be moved to the previous section where it is constructed? This would help improve the flow. 
page 12: \"(2) contain a transcript with a transcript adjusted p-value less than 0.05 which does not participate in DTU, so contain a falsely confirmed transcript\": could the authors please rewrite this sentence for clarity. page 13: sentence \"The testing of “this” vs “others”...\" could be improved for clarity, e.g.: \"DEXseq in its original version requires fitting of coefficients for each exon within a gene. Running DEXseq at a transcript-level considerably improves performance as fewer features per gene require fitting of coefficients.\" page 14, after the line \"dxr <- as.data.frame(dxr[,columns]\": showing head(dxr) could help in clarifying the output. page 15, in the code \"paste0(\"suppa/group1.tpm\")\": the paste function is not necessary here. Section 'Evaluation of methods for DTU': could the authors offer an explanation why SUPPA2 only reported one DGE gene as DTU? Could the y and x axes on the plots on pages 17-20 and 25 be made consistent with each other? Also, very minor point, but these plots have some jpeg artefact. Could pdf or png plots be used instead? page 19 \"DRIMSeq [...] performed slightly better\": could a metric be referenced in how the package performed better? page 22: \"We can repeat the same analysis...\": 'same analysis' is misleading as this section tests only DGE. page 24: could the authors formally introduce or describe EBSeq and SAMseq packages, preferably earlier in the manuscript? 
page 26: could the authors use 'compute time' instead of 'timing'?\n\nWe identified the following typographical errors and grammatical issues:\npage 5: \"We recommend [constructing] a CSV file...\" page 6: \"We suggest for DTU analysis to generate counts from abundance...\" reword to \"For DTU analysis, we suggest generating counts from abundance...\" page 16: \"DEXSeq controlled [the FDR] except for...\" page 16: \"DRIMSeq had [an] observed FDR..\" page 16: \"...reported 2 extra genes more than...\" change to \"reported two more genes than\" page 16: \"...DEXseq were the most sensitive methods [for] recovering\" page 19 \"...DRIMSeq and DEXSeq[,] [in] this additional simulation\" page 19: \"Again, we caveat our comparative evaluation of DRIMSeq and DEXSeq by noting that we do not know...\" change to \"Again, a caveat of our comparative evaluation of DRIMSeq and DEXSeq is that we do not know...\" page 24: \"did not have [adequate] sensitivity to detect DGE\" page 24: \"while those two method[s] had\"\n\nIs the rationale for developing the new method (or application) clearly explained? Partly\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": [
{
"c_id": "3964",
"date": "14 Sep 2018",
"name": "Michael Love",
"role": "Author Response",
"response": "We thank all reviewers for their insightful comments and suggestions that we feel have greatly improved the readability and usefulness of the workflow. We summarize the main changes and then address reviewer-specific comments point-by-point: We have addressed all minor text or grammatical suggestions by the reviewers. We have re-organized the article into distinct and more separated Workflow and Evaluation sections, which was suggested by all reviewers. We begin the article with a clear outline, titled: \"Structure of this article\", which outlines the Workflow part and the Evaluation part. This outline has direct links to relevant sections and subsections which follow. We have also included an overview diagram of the methods and packages included in the Workflow section, and how they are interconnected. We have added to the Introduction more motivational text on why a DTU analysis is relevant for biology and biomedical research. We have added a large section describing the methods DEXSeq and DRIMSeq, before the Workflow section. We have expanded the original sections discussing counts-from-abundance and their use in the workflow, to make our use of the tximport method more clear. For the DEXSeq section, we have corrected an earlier incorrect use of nbinomLRT(), which is now replaced with the correct testForDEU(). The practical result is that DEXSeq performs somewhat less conservatively, but the original code was incorrect, and the fix is necessary. The incorrect use of nbinomLRT() in this context will now produce an error in future releases of Bioconductor, to avoid possible incorrect usage. We have added RATs to the DTU Evaluation. We now apply stageR to all DTU methods that are evaluated: DRIMSeq, DEXSeq, RATs, and SUPPA2. The RATs and SUPPA2 methods are described, but the code is not provided, as these packages are not part of the Workflow. 
We use consistent x-axes and y-axes whenever possible, and use PDF instead of JPG to reduce compression artifacts. When a consistent x-axis is not used in the main text, we include Supplementary Figures with the same plots with outlying methods dropped to keep the x-axis consistent. We use a palette in which colors are more discernable for color-blind readers. In the Evaluation sections, we include additional plots which examine the simulated gene type source of false positives for the DTU, DGE, and DTE analyses. We added a new evaluation to examine performance differences between DRIMSeq and DEXSeq, using the identical simulated data that was used in Soneson et al (2016) and Nowicka and Robinson (2016). We have added a 2 vs 2 simulation for the DTU Evaluation. We added a brief overview description of all methods assessed in the DGE and DTE Evaluations. We have added more recommendations in the Discussion. Reviewer-specific comments: We have tried to separate and clarify the Workflow section and the Evaluation section. We now include an overview diagram, as helpfully suggested here. We have expanded the section on counts-from-abundance, added a section before the counts are imported, and clarified the sentences highlighted by the reviewers. We have clarified a number of the \"not evaluated\" sentences in the original workflow. The description of txdf is given in the section where it is constructed, under the heading \"Transcript-to-gene mapping\". We have clarified the OFDR description in the sentence highlighted by the reviewers, and have removed the \"this\" vs \"other\" sentence, as the history of DEXSeq method development is not necessary or useful for the readers of this workflow. We have added `head(dxr)` to demonstrate the output. 
We have removed the SUPPA2 code, as now the workflow focuses on the Bioconductor package DRIMSeq and DEXSeq, which have live code examples (SUPPA2 is a python package and so cannot have live code examples in a Bioconductor workflow). We have made the x- and y-axes consistent whenever possible. We have revised the Workflow and Evaluation sections following all of the reviewers' helpful comments, error spotting, and suggestions on improved wording."
}
]
},
{
"id": "35682",
"date": "13 Aug 2018",
"name": "Nick Schurch",
"expertise": [
"Reviewer Expertise Bioinformatics",
"RNA-seq",
"transcriptomics tools",
"benchmarking"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn 'Swimming downstream: statistical analysis of differential transcript usage following Salmon quantification' Love, Soneson & Patro present both 1) a workflow for identifying the signatures of differential transcript usage between RNA-seq samples in two conditions, based on a suite of tools, and 2) a benchmarking analysis of the performance of these tools based on simulated data. The aims of this work are laudable and I have no doubt it will be a valuable addition to the literature, but the resulting paper suffers from several flaws and needs considerable additional work, in my opinion.\nMajor comments:\n1) The intermingling of the benchmarking and workflow sections of this manuscript makes the text confused and difficult to read. I'd suggest that the authors either restructure the manuscript beginning with the workflow section and then following with the benchmarking section, or split the work into two and concentrate separately on the two areas.\n2) This work is listed as a Method article. I am not convinced that an example of stringing existing tools together fits the description required for this section (that is: \"Method Articles describe a new experimental, observational, or computational method, test or procedure (basic or clinical research).\"). 
The benchmarking part of the work is better suited to a Research Article, whilst the workflow part is more like a computational protocol and might be better suited for publication as a Study Protocol.\n3) Quantifying transcript expression from RNA-seq data is challenging but has become common-place and relatively straight-forward thanks to the development of high-performance tools such as Salmon and Kallisto. These tools typically provide a transcripts-per-million estimation of a transcript's expression. With these quantifications in place the inevitable, and even more challenging, next step is to identify those transcripts where their expression is changing between samples. To date there has not been a clear data-driven exploration of the underlying statistical properties of TPM quantifications (or estimated transcript counts from TPMs) as a function of biological and technical replication - instead, much as was the case for differential gene expression from RNA-seq data until relatively recently - the tools for identifying DTE are built on the strong assumption of a distribution for the quantifications and, typically, assume a negative binomial distribution. Although this looks to be a good assumption in the case of gene expression, it is far from clear to me that the assumption of a negative binomial distribution for the distribution of a transcript's TPM or estimated counts across biological replicates is a good assumption for TPMs or estimated counts from TPMs, particularly given that - in the context of biological DTU - the expression of a transcript can be strongly correlated with the other child transcripts of the gene. The fixed per-gene dispersion section seems like the beginnings of an exploration in this area but this assumption too is without any justification. Perhaps the authors could use some highly replicated data from a complex eukaryote to actually measure these distributions and give clarity on whether these assumptions are valid? 
Or, failing that, explore the impact of different potential distributions of the tool performance?\n4) The entire discussion section of the benchmarking results is essentially missing and the current discussion section is more like a brief conclusion. Points that I would like to see the authors discuss in detail include:\nThe low overall TPRs exhibited by all the tools; 25-80% for DTU, 50-80% for DGE & only 20-50% for DTE. What this means for the field and how might these be improved? The TPR/FPR performance of the tools not only as a function of the sample size, but also as a function of the annotation used in the original transcript quantitations, as a function of the effect-size threshold used and as a function of the low-count-rate filtering used for each tool. These are all critical parameters in the tools' performance. An expanded discussion of the extremely poor FPR performance of DRIMseq, that is largely glossed-over in the current text. Why is DRIM-seq performing so poorly? Is it more or less dependent on the specific parameters used, or the details of the simulated data, than the other tools - or is it just generically over-sensitive across all the parameter space? The overlap between the sets of DTU, DGE & DTE identified by each tool; instead the authors just give us some numbers and the TPR/FPR performance metrics. Are these tools reliably identifying the same thing or are they finding wildly different sets of results? (but please, no Venn diagrams! I can respectfully recommend upsetR for this kind of plot). The use of p-values, adjusted or not, as a threshold for subsetting these results for scientific relevance - particularly given Blume et al. 20181. Some discussion of why the authors limit themselves to discussing DRIMseq, DEXSeq and SUPPA2 despite listing five additional alternative methods in the introduction. 
Alternatively, the authors could include these tools in their benchmarking, particularly if they decided to split the work into two papers with one of these focussing on the benchmarking. Some discussion of the impact that the development of long-read sequencing of native RNAs will have on this field, these tools, and their results in the next few years - perhaps the authors could even use some of the publicly available data from the Oxford Nanopore RNA consortium (https://github.com/nanopore-wgs-consortium/NA12878/blob/master/RNA.md) to contrast the performance of this new technology with the tools they examine here for detecting DTE and DTU. How do these tools cope with RNA-seq experiments with more complex designs? For example, what about if there are 7 conditions, or a time-series (see for example Calixto et al., 20182)? What approaches would the authors then recommend?\n\n5) No effort has been made to test these workflows with real data with validated instances of DTU. These exist in the published literature. For a workflow description this is fine, but for the benchmarking aspect of the work I would like to see the authors use this pipeline in anger, with real data, and see what the results are and how they match up with the validated results.\n6) The introduction does not motivate the importance of identifying DTU in biology. I'd like to see the introduction present the biological relevance of DTU, the relative sparsity of existing validated DTU instances, and the scope DTU has for being an explored layer of regulation for basic biological processes.\n7) The only conclusion from the paper seems to be that the authors recommend the use of stageR - based largely on the fact that its two-stage model matches what the authors think a typical analysis workflow is. 
This conclusion may be sound advice but a) this paper does not present any compelling *evidence* that this is a typical workflow, and b) stageR is not really what this paper is about. Indeed, here stageR is used as a framework to assist with assessing the performance of the other tools. I'd like to see the authors instead draw some clear conclusions about which tools are the best to use for identifying DTU.\nMinor Comments:\n1) The workflow section really needs some workflow diagrams to highlight the chain for each tool and where they are similar and different.\n2) The plots in the paper are not as high quality as I'd expect: - Figures need to be higher resolution (this may be the journal's fault, not the authors') - Figures 3,5,6,8,12 & 13 are multi-panel figures with the same axes on each figure. They would benefit from being plotted with shared axes allowing the performance between different sample sizes to be more clearly visible to the reader. - Figures 9-11: perhaps consider using a multi-panel 2d histogram to show the density profiles for each group, or at least using a better point symbol.\n\nIs the rationale for developing the new method (or application) clearly explained? No\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": [
{
"c_id": "3966",
"date": "14 Sep 2018",
"name": "Michael Love",
"role": "Author Response",
"response": "We thank all reviewers for their insightful comments and suggestions that we feel have greatly improved the readability and usefulness of the workflow. We summarize the main changes and then address reviewer-specific comments point-by-point: We have addressed all minor text or grammatical suggestions by the reviewers. We have re-organized the article into distinct and more separated Workflow and Evaluation sections, which was suggested by all reviewers. We begin the article with a clear outline, titled: \"Structure of this article\", which outlines the Workflow part and the Evaluation part. This outline has direct links to relevant sections and subsections which follow. We have also included an overview diagram of the methods and packages included in the Workflow section, and how they are interconnected. We have added to the Introduction more motivational text on why a DTU analysis is relevant for biology and biomedical research. We have added a large section describing the methods DEXSeq and DRIMSeq, before the Workflow section. We have expanded the original sections discussing counts-from-abundance and their use in the workflow, to make our use of the tximport method more clear. For the DEXSeq section, we have corrected an earlier incorrect use of nbinomLRT(), which is now replaced with the correct testForDEU(). The practical result is that DEXSeq performs somewhat less conservatively, but the original code was incorrect, and the fix is necessary. The incorrect use of nbinomLRT() in this context will now produce an error in future releases of Bioconductor, to avoid possible incorrect usage. We have added RATs to the DTU Evaluation. We now apply stageR to all DTU methods that are evaluated: DRIMSeq, DEXSeq, RATs, and SUPPA2. The RATs and SUPPA2 methods are described, but the code is not provided, as these packages are not part of the Workflow. 
We use consistent x-axes and y-axes whenever possible, and use PDF instead of JPG to reduce compression artifacts. When a consistent x-axis is not used in the main text, we include Supplementary Figures with the same plots with outlying methods dropped to keep the x-axis consistent. We use a palette in which colors are more discernable for color-blind readers. In the Evaluation sections, we include additional plots which examine the simulated gene type source of false positives for the DTU, DGE, and DTE analyses. We added a new evaluation to examine performance differences between DRIMSeq and DEXSeq, using the identical simulated data that was used in Soneson et al (2016) and Nowicka and Robinson (2016). We have added a 2 vs 2 simulation for the DTU Evaluation. We added a brief overview description of all methods assessed in the DGE and DTE Evaluations. We have added more recommendations in the Discussion. Reviewer-specific comments: 1) We have followed the reviewer's suggestion, and have separated the Workflow and Evaluation sections, with an outline at the beginning clearly delineating the two sections, and an overview diagram. 2) We originally submitted our Bioconductor workflow as a \"Research\" article, but the Editorial Office recommended to change the categorization to \"Method\", which is the categorization of many of the other Bioconductor workflows. Bioconductor workflows are not intended to introduce new computational methods or new software packages, but to demonstrate, with live code that resides in an Rmarkdown vignette within an R package structure, how to use a number of different existing Bioconductor packages to analyze a dataset. 
We asked for comment from the Editorial Office on the recommended categorization of Bioconductor workflows under the F1000Research article types, and they provided us with the following statement: \"In general, Bioconductor workflows are classified as Method articles in F1000Research, since Research articles must present novel research findings, and Software Tool articles must present novel software tools. Since this article by Love et al neither presented novel research findings nor a new software tool, the F1000Research editorial office felt that classifying this article as a Method article was most appropriate. The majority of workflows submitted to the Bioconductor gateway will fall into this article type.\" -F1000Research Editorial Office 3) We have followed the reviewer's suggestion and included, in addition to the fixed per-gene dispersion simulation, an additional simulation from Soneson et al. (2016), to assess differences between DRIMSeq and DEXSeq, the two methods that are the focus of the workflow. This simulation involved generation of Negative Binomial gene counts, and then the expression was distributed from genes to transcripts by per-sample draws from a Dirichlet distribution, with a minority of genes undergoing DTU across condition. Analysis of additional datasets, and a final determination of which type of data-generating process is closer to various real RNA-seq datasets, is beyond the scope of this workflow, but we feel that the existing simulations cover a range of possibilities and are useful to the readers of the workflow. 
We comment in a number of places on the limitations of the simulation, including in the overview: \"While the evaluations rely on simulated data, and are therefore relevant only to the extent that the simulation model and parameters reflect real data, we feel the evaluations are useful for a rough comparison of method performance, and for observing relative changes in performance for a given method as sample size increases.\" Also at the end of the DTU Evaluation: \"Again, a caveat of all of our comparative evaluations of DRIMSeq and DEXSeq is that we do not know whether various real RNA-seq experiments will more closely reflect heterogeneous dispersion or fixed dispersion within genes, or if the counts within gene are better modeled by distributing gene-level abundance to transcripts via a Dirichlet distribution as in Soneson et al (2016). However, we have examined simulations reflecting each of these cases, and confirmed that minimum count and minimum proportion filtering benefit both DRIMSeq and DEXSeq.\" 4) We now include more discussion on the results of the evaluations in the Discussion, including a comment on statistical power. We include a breakdown of false positives by the simulated gene type. Further cross-section of all methods' performance by incomplete annotation, effect size filters, and various count or proportion filters is beyond the scope of the article. Complete analysis of overlap of calls across the various simulations and analyses is also beyond the scope of the article. We now explore DRIMSeq's performance in the \"main\" and \"fixed per-gene dispersion\" simulations, wherein we see that many of the excess false positives at the transcript-level arise from simulated DTU genes, so other transcripts not participating in DTU were being reported as significant. In the “main” simulation, where DRIMSeq has the most problem with FDR control, it only slightly exceeds a target 10% FDR at the gene level at per-group sample sizes 6 and higher. 
With proportion SD filtering, DRIMSeq at the transcript level also has small inflation of target 10% FDR for per-group sample sizes 6 and higher. We now include RATs as an additional method evaluated on the \"main\" simulation for DTU analysis. RATs performs similar to SUPPA2, in that it nearly always controls the FDR, although in some cases, it displays higher gene-level sensitivity than SUPPA2. We do not intend the article to be a complete evaluation of all existing methods for DTU, but to compare the two Bioconductor methods that are the focus of the workflow with a few key DTU methods. Extended discussion of long-read sequencing is beyond the scope of the article, although we added the following comment to the workflow section on importing counts: \"If a different experiment is performed and a different quantification method used to produce counts per transcript which do not scale with transcript length, then the recommendation would be to use these counts per transcript directly. Examples of experiments producing counts per transcript that would potentially not scale with transcript length include counts of full-transcript-length or nearly-full-transcript-length reads, or counts of 3' tagged RNA-seq reads aggregated to transcript groups. In either case, the statistical methods for DTU could be provided directly with the transcript counts.\" A relevant quote from Nowicka and Robinson (2016) is: \"With emerging technologies that sequence longer DNA fragments (either truly or synthetically), we may see in the near future more direct counting of full-length transcripts, making transcript-level quantification more robust and accurate.” In the \"DTU testing\" section, we now discuss how DEXSeq and DRIMSeq can be used to evaluate experiments with complex designs, with little limitation as long as the coefficients for each sample can be encoded as a design matrix multiplied by a vector of coefficients. 
5) Comprehensive evaluation of the methods on additional datasets is beyond the scope of the article. 6) Following this and other reviewers' suggestion, we have now added motivation to the first part of the Introduction as to why DTU is relevant for biological or biomedical research. 7) We have revised some of our description of the stageR framework to be more clear about why we recommend its use in a DTU workflow: \"It is likely that an investigator would want both a list of statistically significant genes and transcripts participating in DTU, and stageR provides error control on this pair of lists, assuming that the underlying tests are well calibrated.\" We also provide some more details in the Discussion regarding the various methods and their performance. Minor Comments: 1) We have added an overview diagram as Figure 1. 2) We have updated figures to be PDF instead of JPG, and made the axes more consistent when possible."
}
]
}
] | 1
|
https://f1000research.com/articles/7-952
|
https://f1000research.com/articles/7-1588/v1
|
01 Oct 18
|
{
"type": "Research Article",
"title": "Soil is a key factor influencing gut microbiota and its effect is comparable to that exerted by diet for mice",
"authors": [
"Dongrui Zhou",
"Zhimao Bai",
"Honglin Zhang",
"Na Li",
"Zhiyu Bai",
"Fudong Cheng",
"Haitao Jiang",
"Chuanbin Mao",
"Xiao Sun",
"Zuhong Lu",
"Zhimao Bai",
"Honglin Zhang",
"Na Li",
"Zhiyu Bai",
"Fudong Cheng",
"Haitao Jiang",
"Chuanbin Mao",
"Xiao Sun",
"Zuhong Lu"
],
"abstract": "Exposure to an unsanitary environment increases the diversity and alters the composition of gut microbiota. To identify the key element in the unsanitary environment responsible for this phenomenon, we investigated the effect and the extent by which the soil in our environment influenced the composition of gut microbiota. Results show that adding unsterile or sterile soil to bedding, either before birth or after weaning, influences significantly the composition of mice gut microbiota. Specifically, unsterile soil increases the richness and biodiversity of gut microbiota. Interestingly, based on UniFrac distance analysis of 16S rRNA sequences, the impact of soil on gut microbiota is comparable to that exerted by diet. These findings provide a potential new strategy for intervening on the human gut microbial community and preventing disease.",
"keywords": [
"gut microbiota",
"hygiene hypothesis",
"factors influencing gut microbiota",
"16S rRNA sequencing"
],
"content": "Introduction\n\nThe relationship between human gut microbiota and certain diseases has become increasingly apparent1,2. The main contributing factors are delivery mode3–5, age6,7, antibiotic treatment4,6, diet, and the living environment. They affect gut microbiota to different degrees. Among these factors, diet is the most widely studied one, as it modulates the composition and then changes the function of the microbial community in humans8–15. The microbiota of malnourished children was shown to impair growth and cause metabolic abnormalities in the brain and other organs of recipient gnotobiotic mice16,17. By altering the gut microbiota, diet also contributes to other chronic illnesses, such as obesity1,2, cardiovascular diseases18, and autism19. The human gut microbiota responds rapidly to dietary changes8,20, even though long-term dietary habits remain the dominant force in determining the composition of an individual’s gut microbiota10,15,21,22.\n\nFollowing what is known as the hygiene hypothesis, an overly clean modern life leads to immune dysfunction and more allergic diseases23,24. Many studies have shown large differences in the human gut microbiome between populations in rural areas across the world and those in Europe or America7,9,25. Our early research suggested that the cleanness of a living environment substantially altered the composition of gut microbiota in mice26. After analyzing the elements in bedding material, it was first proposed that microbes might play a key role in changing the composition of mouse gut microbiota27. However, a follow-up study showed that adding microbes to the bedding had a limited effect on the composition of the dominant genera28. Thus, the unsanitary environment factors that act on gut microbial communities remain to be identified.\n\nUnsanitary environments are rich in surface soil. This hosts a large number of microbes and is rich in nutrients. 
At present, it is unclear what effect the soil in our environment exerts on the composition of our gut microbiota, to what extent it causes such alterations, and what the time-scale of any changes may be. To understand these interrelated issues, we raised C57 mice on four different beddings. Groups were designed to explore the effect of sterile and unsterile soil on the gut bacterial community, the extent of this effect compared with that of a normal or high-fat diet (formula shown in Supplementary Table 1), and the timing of exposure to soil (before birth or after weaning) (Supplementary Figure 1).\n\n\nResults\n\nWe analyzed the bacterial community composition using MiSeq sequencing targeted to the 16S rRNA gene in up to 180 fecal samples (Supplementary Table 2). For each sample, we PCR-amplified the V4 hypervariable region using the 515F-907R primer set29. The sequencing data set consisted of 10,937,871 high-quality, classifiable 16S rRNA gene sequences, with at least 45,816 sequences per sample (Supplementary Table 2).\n\nOn the basis of the obtained sequences, we discovered that sterile and unsterile soil significantly altered gut bacterial community composition (Figure 1 and Supplementary Table 3) in a diet-dependent manner. Mice fed a normal diet and living on sterile soil after weaning exhibited significantly more Bacteroidetes (P < 0.001) and fewer Firmicutes (P < 0.001) (Figure 1A and Supplementary Table 3a). Unsterile soil added before birth also significantly increased the number of Bacteroidetes (P = 0.032). By contrast, mice fed a high-fat diet and raised on bedding with unsterile soil added before birth or after weaning showed significantly more Actinobacteria (P < 0.001) (Figure 1B and Supplementary Table 3b).\n\nValues from all available fecal samples were averaged (n = 8, 9, or 10 per treatment). 
(A,B) Relative abundance of bacteria on the eighth week in fecal samples of mice raised on specific-pathogen-free grade (SPF) sterile bedding (control), sterile bedding with sterile soil added after weaning (SS), sterile bedding with dirty unsterile soil added after weaning (SD), and sterile bedding with unsterile soil added prior to birth (DE) fed (A) a normal diet (ND) or (B) a high-fat (HF) diet. (C–F) Relative bacterial abundance in samples collected on the fourth (W4), fifth (W5), seventh (W7), and eighth (W8) week from mice of the (C) SPF-ND group, (D) SPF-HF group, (E) SD-ND group, and (F) SD-HF group.\n\nMicrobes in the environment were reported to affect the colonization of the intestinal microflora in newborns30. Here, we found that microbes and soil in the bedding could change the composition of gut microbiota after mice were weaned. When mice were fed a normal diet and lived on bedding with unsterile soil, they consistently showed more Bacteroidetes (P < 0.001) and fewer Firmicutes (P < 0.001) (Figure 1E and Supplementary Table 4a) during the first 3 weeks. After the third week, the abundance of these two phyla returned to the level from the previous week. When fed a high-fat diet and provided with bedding containing unsterile soil, mice showed an increased abundance of Actinobacteria on the third week of treatment (P = 0.030), becoming highest on the fourth week (P < 0.001) (Figure 1F and Supplementary Table 4b). In summary, we observed that the gut microbiome could respond rapidly to alterations in the living environment and diet, potentially facilitating adaptation to a variety of lifestyles.\n\nWe further discovered that unsterile soil could increase microbial diversity, particularly for mice on a normal diet, whether it was added before birth or after weaning (Figure 2, Supplementary Figure 3, and Supplementary Table 5, Supplementary Table 6). 
By comparison, microbial diversity was unaffected by sterile soil in mice fed a normal diet and decreased in those on a high-fat diet (Supplementary Figure 3 and Supplementary Table 5). Earlier, we found that merely adding environmental microbes to the bedding could increase diversity but had little effect on the composition of dominant bacteria in mice gut microbiota28. Hence, microbes appear to contribute mainly to microbial diversity, whereas soil may alter microbial community structure.\n\n(A–C) SPF and SD groups fed a normal diet: (A) operational taxonomic units (OTUs), (B) Chao 1 estimates, and (C) Shannon index. (D–F) SPF and SD groups fed a high-fat diet: (D) OTUs, (E) Chao 1 estimates, and (F) Shannon index. Values from all available fecal samples were averaged (n = 8–10; *P < 0.05, **P < 0.01, based on a two-tailed least significant difference test).\n\nAn epidemiological survey has shown that immune system diseases, such as asthma and wheeze, display a skewed sex bias towards males31. Here, we found that abundance of the phyla Bacteroidetes and Firmicutes was related to gender, except in mice of the DE group (sterile bedding with unsterile soil added prior to birth), which were exposed to unsterile soil bedding from birth (Supplementary Figure 4, Supplementary Figure 5 and Supplementary Table 7). Bacteroidetes were more abundant in males than in females of the mice raised on specific-pathogen-free bedding (SPF group) fed a normal diet (P = 0.019). By contrast, Firmicutes were more abundant in males than in females in the mice raised on sterile grade murine bedding with dirty unsterile soil (6:11 (w/w)) added after weaning (SD group) on a high-fat diet (P = 0.042) (Supplementary Figure 5 and Supplementary Table 7).\n\nRandom Forests is a supervised machine-learning technique that can classify a sample by estimating the importance of OTUs at species level according to their relative abundance and sample probability32. 
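To make this type of analysis concrete, the sketch below runs a Random Forests classification on a synthetic OTU relative-abundance table using scikit-learn (the study itself used the R implementation); the sample counts, OTU numbers, and effect size here are invented purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic OTU relative-abundance table: 40 samples x 50 OTUs,
# two treatment groups of 20 samples each. OTU 0 is artificially
# enriched in group 1 so the classifier has a real signal to find.
X = rng.dirichlet(np.ones(50), size=40)
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 0] += 0.3
X /= X.sum(axis=1, keepdims=True)  # renormalize rows to proportions

# 500 trees, as in the Methods; oob_score=True yields the
# out-of-bag (OOB) estimate of the generalization error.
clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)

oob_error = 1.0 - clf.oob_score_
# Importance estimates rank OTUs by how strongly they drive the classification
top_otu = int(np.argmax(clf.feature_importances_))
print(round(oob_error, 3), top_otu)
```

With this construction, the enriched OTU 0 should emerge as the most important feature with a near-zero OOB error; in the article, the analogous treatment-versus-treatment comparisons are reported in Supplementary Table 8–Supplementary Table 10.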
We found that distinct community signatures existed between any two treatments (Supplementary Table 8–Supplementary Table 10). For mice on a high-fat diet, there were 79 predictive species-level OTUs between the SPF and DE groups (baseline error = 0.44, cross-validation error = 0 ± 0), of which 63 were overrepresented in the DE group. Moreover, out of these 63, 25 and 20 were assigned to the classes Actinobacteria and Bacilli, respectively (Supplementary Table 8b). A comparison between SPF and SD groups gave similar results (Supplementary Table 8a); however, the decrease in beneficial bacteria was greater in the mice raised on sterile grade murine bedding with sterile soil (6:11 (w/w)) added after weaning (SS group) than in the SPF group (Supplementary Table 8c).\n\nWe further analyzed differences in the composition of gut bacterial communities between any two groups of mice on a normal diet (Supplementary Table 9). Compared to the SPF group, DE and SD groups presented more Actinobacteria and Bacilli, whereas the SS group showed more Bacteroidia (Supplementary Table 9). Notably, bacteria of the Bacteroidales S24–7 family were more prevalent among SPF mice fed a high-fat diet than in those fed a normal diet. By contrast, the latter had more bacteria of the families Lachnospiraceae and Ruminococcaceae (Supplementary Table 10a). The DE group fed a high-fat diet showed more Actinobacteria and Bacilli than animals fed a normal diet. By contrast, the latter had more Bacteroidales S24–7 and Lachnospiraceae (Supplementary Table 10c). Thus, unsterile/sterile soil bedding affected the structure of mice gut microbial communities in a diet-dependent manner. 
The effect varied depending on whether i) mice were raised on bedding with soil added before birth or after weaning, ii) the bedding contained sterile or unsterile soil, and iii) mice were fed a normal or high-fat diet.\n\nWe used UniFrac distances to analyze the 16S rRNA datasets and measure similarities among microbial communities. The distance between mice exposed to two different living environments and fed a high-fat diet was similar to that between mice using the same bedding and fed either a normal or a high-fat diet (Figure 3 and Supplementary Table 11). The distance between SPF groups fed a normal or a high-fat diet did not differ significantly from that between SPF and SS, and between SD and DE mice fed a high-fat diet. Indeed, these groups were among the least distant. Distances between mice maintained on different beddings and a high-fat diet were greater than those between SPF groups fed different diets. Thus, soil appears to influence the composition of the gut microbial community to the same extent as diet. In addition, a change in living environment also had an important effect on gut microbiota. When fed a normal diet, microbiota in the gut of SPF and DE mice were more similar than those of other groups (Figure 3 and Supplementary Table 11).\n\nFor mice fed a normal diet, the shortest UniFrac distance was between specific-pathogen-free (SPF) and sterile bedding with unsterile soil added prior to birth (DE) groups, and the longest one was between sterile bedding with dirty unsterile soil added after weaning (SD) and sterile bedding with sterile soil added after weaning (SS) groups; comparisons between SPF and SD, and between DE and SS showed no significant differences. 
For mice fed a high-fat diet, the shortest distances were SPF–SS and SD–DE, with no significant difference between them; the longest distances were SD–SS and DE–SS; SPF–SD was significantly longer than SPF-DE (n = 8–10; *P < 0.05, **P < 0.01, based on a two-tailed least significant difference test).\n\nUnsupervised clustering using PCoA of UniFrac distance matrices confirmed that living environments and diets explained the variation in the data set (Figure 4 and Supplementary Figure 6, Supplementary Figure 7). When fed the same diet, the gut microbial communities of mice from one living environment clustered apart from those of mice subjected to other treatments (Figure 4A, B). For two different living environments, the two high-fat diet groups showed a similar transfer distance as mice maintained on the same bedding but fed two different diets (Figure 4C, D and Supplementary Figure 6). For the SPF group fed a normal diet, almost no differences were observed among fecal samples collected at four different time points (Supplementary Figure 7A). These results confirm that the living environment exerts a great influence on the composition of gut microbiota. It should be noted that we did not find significant clustering with respect to cage or gender.\n\nAnalysis was based on the Illumina bacterial 16S rRNA gene data set (V4 region). Mice from the four groups were fed a (A) normal or (B) high-fat diet. (C) Mice in the sterile bedding with dirty unsterile soil added after weaning (SD) and specific-pathogen-free (SPF) groups were fed a normal or high-fat diet. (D) Mice of the sterile bedding with sterile soil added after weaning (SS) and sterile bedding with unsterile soil added prior to birth (DE) groups were fed a normal or high-fat diet.\n\n\nDiscussion\n\nIn the present study, we identified differences in gut microbiota between mice living on sterile or unsterile soil and fed a normal or a high-fat diet. 
Sterile and unsterile soil substantially altered the composition of gut microbiota, with unsterile soil also increasing bacterial diversity. In addition, we demonstrated that unsterile soil added before birth or after weaning altered the gut microbiome, irrespective of whether mice were fed a normal or a high-fat diet. Hence, we suggest that soil is one of the key factors influencing gut microbiota and that its effect is comparable to that exerted by diet.\n\nMice have the habit of coprophagy and fur combing, through which they may swallow some soil. As a result of agriculture and animal husbandry, our ancestors were also in close, daily contact with soil. Such contact is still common among infants (age 12–14 months), who crawl on grounds rich in soil and animal feces. During this period, they use their mouths to feel the world around them and experience a rapid development of the immune and nervous systems. Our results may partly explain the prevalence of some diseases in modern life.\n\nWe show that sterile soil present in the surrounding environment has a role in shaping the composition and structure of gut microbiota. At birth, the gut is sterile and ambient microbes play a key role in shaping the bacterial community3–6,30,33. Our early research showed that microbes in the environment had a limited capacity to influence the composition and structure of dominant bacterial communities in adult mice28. Here, we show that sterile soil in the bedding significantly altered the composition and structure of gut microbial communities in weaned mice, but had little effect on their diversity and richness. By contrast, unsterile soil changed microbiota composition and structure, as well as its diversity and richness. Together, these studies indicate that a healthy gut microbiota benefits from close contact with soil and the microbes in it. 
It is possible that unsterile soil promotes the growth of certain bacteria, while inhibiting others, especially in mice fed a high-fat diet.\n\nThe most obvious effect of unsterile soil on the intestinal microflora was the predominance of the phylum Actinobacteria in mice fed a high-fat diet. Vatanen et al.34 studied the development of gut microbiota in infants from birth until the age of 3 years and found that early-onset autoimmune diseases were common in Finland and Estonia but were less prevalent in Russia. Their results showed that Actinobacteria were more abundant among infants under 12 months in Russia than in Finland and Estonia; however, there was no difference after the age of 1 year34. This might be explained by small infants having a diet rich in protein and fat, and those in Russia being in closer contact with unsterile soil.\n\nThe results also indicate how the timing of exposure to unsterile soil influences the formation of bacterial communities. Some studies reported that early environmental exposure exerted a sustained influence on the acquired gut microbiota, and that the early microbiota shaped the later one35–37. Our results reveal that microbiota differed depending on whether mice were exposed to unsterile soil early or after weaning. It remains to be determined whether the difference between gut microbiota of mice that were exposed to soil before birth or after weaning influences the animals’ health.\n\nOur synergetic assessment of host soil and diet provides additional insight into the effect of diet on gut microbiota, and the possible effects on colorectal cancer and immune system diseases. We speculate that differences in bacterial flora mirror a different metabolite composition. Epidemiological studies indicate that red and processed meat intake is associated with an increased risk of colorectal cancer38,39 and diseases of the immune system40. 
However, in pastoral areas, such as Tibet, where the local diet is rich in beef and mutton, mortality from cancer and childhood asthma is the lowest in China41,42. Compared with a modern western lifestyle, the residents of Tibet are in close daily contact with soil. To this end, unsterile soil may alter microbiota composition and structure, increasing its diversity and richness, and affecting the metabolic processing of red meats. This, in turn, may enhance human immunity and resistance to chronic diseases; however, this mechanism needs to be confirmed via further research.\n\nOur results provide further evidence in support of the hygiene hypothesis. In developed countries, human living environments and lifestyle have changed immensely during the last few decades. People spend ~92% of their time indoors43 and there is very little soil in their living environment. At the same time, many microbes appear to have been lost from the modern human body7,9,25, while the burden of chronic inflammatory diseases, including atopic diseases, has increased dramatically. Our results suggest that an important cause of these diseases is represented by the lack of contact with soil and soil microorganisms.\n\nThe cleanness of our living environment and diet could be readily modified. This approach may provide a new simple route to intervene on the human gut microbial community, as well as design specific diets aimed at disease prevention. A few open questions nevertheless remain. Further research will determine how the soil from the host environment alters human gut microbiota, whether through swallowing, body mucosae, or by other means. Similarly, it remains to be investigated whether different types of soil or soil microbes have different effects on intestinal microecology.\n\n\nMethods\n\nWe purchased 6-week-old male (n=1) and female (n=3) C57 mice from B & K Universal (Shanghai, China) for breeding. One male was mated with three females. 
After the females became pregnant, two of them were shifted to a separate cage. The newborn mice were weaned and when they were 4 weeks old they were moved to separate cages.\n\nWe selected 8 males and 24 females from among the weaned mice to carry out further experiments. Next, two males and six females were selected from different cages and transferred to room number 2. The cages were furnished with sterile grade murine bedding and unsterile soil (6:11 w/w bedding to soil). The remaining 6 males and 18 females were raised in room number 1, on sterile grade bedding. The bedding was changed once a week. The soil used in the experiments was from the top 10 cm of farm ground where goats, hens, and ducks were abundant.\n\nWhen mice were 6 weeks old, they were bred with 1 male per 3 females. In room number 1, 30 males and 30 females, aged 23–37 days (weight, 8.6–21.4 g; the fourth week group, W4), were selected for experiments. They were randomly separated into three groups: the SPF group, raised on specific-pathogen-free bedding; the SD group, raised on sterile bedding with unsterile soil (6:11 (w/w)) added after weaning; and the SS group, raised on sterile bedding with sterile soil (6:11 (w/w)) after weaning. The SPF group served as the control. The randomization procedure was as follows. Considering gender matching, more than 80 mice were used in this experiment, and it was very difficult to breed so many mice at the same time. When we had about 20 mice with an age difference within about 10 days, we divided them into two groups, one fed a normal diet and the other a high-fat diet. These 20 mice had been treated with the same bedding. The other groups of 20 mice were treated with the second or third type of bedding. The breeding mice were uniformly distributed and randomly grouped. The SD group was transferred to room number 2 after weaning. 
Next, 10 males and 10 females, who were born in room number 2, aged 23–33 days (W4) were selected for experiments and raised on the same kind of bedding as before weaning (group DE). For all experiments, two or three mice were kept in a cage and the bedding was changed once a week.\n\nFor each group, 10 mice (five males and five females) were fed the same sterile grade commercial normal pellet diet as before weaning, while 10 mice (five males and five females) were fed a sterile grade commercial high-fat pellet diet after weaning (Supplementary Table 1). All animals were fed the same sterile distilled water. They were allowed free access to water and food. All animals were housed in two specific-pathogen-free animal rooms with a 12-h light/12-h dark cycle at 24°C ± 2°C and humidity of 40% ± 5%.\n\nFor SPF and SD groups, the first fecal samples were collected before mice were shifted to separate cages, and then on the fifth, seventh, and eighth week (Supplementary Figure 2). For DE and SS groups, fecal samples were collected on the eighth week (Supplementary Figure 2). After collection, fecal samples were put immediately into an ice box and stored at −80°C within 8 h.\n\nAnimal experiments were performed in strict accordance with the guidelines of the Animal Research Ethics Board of Southeast University. All experiments were approved by the Animal Care Research Advisory Committee of Southeast University and the National Institute of Biological Sciences (approval number: 2014063009). All efforts were made to alleviate the suffering of animals.\n\nSpecifically, mouse health was monitored every other day and body weight measurements were performed every week. Mouse health was assessed by observing changes in body weight or fecal shape, and external physical appearance. 
The animals would be euthanized to minimize pain and distress if they became severely ill over the experimental period, as when they showed one or more symptoms, such as 15–20% weight loss, diarrhea, loss of hair quality, pain (arched back and curled up posture), or listlessness for more than 1 week.\n\nEuthanasia was performed as follows. A 1% (w/v) dose of sodium pentobarbital (50 mg/kg) was administered through an intraperitoneal injection using a hypodermic needle. When mice lost consciousness, they were killed by neck dislocation and death was confirmed. Two individuals were required to perform injections, one to hold the animal and the other to perform the procedure. Animals were not left unattended during the procedure. The ARRIVE reporting guidelines were followed during this study (Supplementary File 1)44.\n\nGenomic DNA was extracted from fecal samples according to the protocol proposed by Zoetendal et al.45. Each sample (40–60 mg) was weighed and put into 15 ml 1 × PBS (pH 7.4). Samples were re-suspended completely by vigorous shaking and subsequently centrifuged at 700 × g at 4°C for 5 min. The supernatants were transferred to a 15-ml tube and centrifuged at 9000 × g at 4°C for 5 min. The pellets were transferred to a 2-ml Eppendorf tube and re-suspended in 300 μl 10× TE buffer (10 mM Tris-HCl pH 8.0, and 1 mM EDTA pH 8.0). Lysozyme (100 μl, 200 mg/ml) was added to each tube, and incubated for 1 h at 37°C. Subsequently, 50 μl 10% SDS (w/v) and 20 μl proteinase K (20 mg/ml) were added to the samples and incubated for 2 h at 50°C. Next, 100 μl 5 M NaCl was added, followed by incubation at 65°C for 10 min. The samples were mixed gently with an equal volume of Tris-Phenol (pH 8.0) for 5 min. The mixtures were centrifuged at 9000 × g for 5 min. The upper layer was transferred to a new tube. This step was repeated twice, after which chloroform was used instead of Tris-Phenol to clean the samples one more time. 
Finally, a 1/10 volume of 3 M sodium acetate (pH 5.2) and two volumes of 95% ethanol were added. Samples were mixed gently and centrifuged at 4°C and 9000 × g for 20 min. The precipitated genomic DNA was washed twice in 200 μl 70% ethanol. The dried DNA samples were re-suspended in 50 μl sterile double-distilled water. DNA quality was monitored by gel electrophoresis and spectrophotometry (260/280 nm). The samples were stored at −20°C.\n\nThe analysis of gut microbial communities was performed using MiSeq high-throughput sequencing of bacterial 16S ribosomal RNA (rRNA) genes. For each sample, the V4 hypervariable region was amplified using the 515F and 907R primer set, as described by Angenent et al.29. PCR was performed with 300 ng microbial community DNA as a template, using 1 μl of Trans Start Fast Pfu Taq DNA Polymerase (TransGen Biotech, Beijing, China), 0.25 mM of each dNTP, 1× Trans Start Fast Pfu buffer, 0.2 μM forward primer, and 0.2 μM reverse primer, in a total reaction volume of 50 μl. The cycling conditions were: 94°C for 3 min followed by 27 cycles of 95°C for 45 s, 50°C for 45 s, and 72°C for 45 s, with a final extension at 72°C for 5 min. Each sample was amplified in duplicate, combined, purified, and then quantified using the QuantiFluor™-ST Handheld Fluorometer with UV/Blue Channels (Promega Corporation, Fitchburg, WI, USA). Sequencing of the PCR product libraries was performed on an Illumina MiSeq platform (Illumina Inc., San Diego, CA, USA) using 2 × 250 bp chemistry. We merged the raw paired-end reads using FLASh software version 1.2.7, set to a minimum overlap of 10 bp. Other parameters were set to default settings. PCR artifacts were removed by eliminating low-quality sequences using Trimmomatic version 0.3046. Chimeric sequences were removed using Usearch version 7.147. We clustered the quality-checked sequences into de novo OTUs at a 97% similarity threshold using version 1.8.0 of the QIIME software package48. 
The generated OTUs were classified using the RDP classifier software version 11.5 at a 70% confidence threshold for sequences longer than 200 bp49.\n\nAll samples were rarefied down to 45,816 sequences per sample to prevent bias due to sampling depth. We used QIIME to calculate the α-diversity indices (Chao1 estimator and Shannon diversity/richness) for each sample48. We calculated the metrics of unweighted UniFrac distances between any two treatments and performed principal coordinates analysis (PCoA) through QIIME to examine dissimilarities in community composition. PCoA was used to compare groups of samples based on unweighted UniFrac distance metrics by plotting n samples in (n − 1)-dimensional space.\n\nRandom Forests analysis in R version 3.2.37 was performed with 500 trees and all default settings using 16S rRNA-based OTUs from the Illumina V4 data sets as described previously50. We used out-of-bag (OOB) error to estimate the generalization error for all 16S rRNA comparisons involving all treatments. For each comparison, 100 relevant subsets of samples were extracted from the table of OTUs, and the average OOB error estimates and OTU importance estimates were calculated from subset samples. For a direct evaluation of the predictive strength of the OTUs, we compared generalization errors at various sequencing depths: the lowest observed depth of 45,816 sequences and at sequencing depths of 100, 1000, 10,000, and 40,000 reads per sample. The mean and standard deviation of the OOB error were estimated for each classification task using 100 independent rarefactions of the data. 
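The α-diversity indices named above have simple closed forms. As a minimal sketch (not the QIIME implementation), the Shannon index and the bias-corrected Chao1 estimator can be computed from a single hypothetical rarefied OTU count vector as follows:

```python
import math

def shannon(counts):
    """Shannon diversity: H = -sum(p_i * ln p_i) over observed OTUs."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical OTU counts for one sample (real samples were rarefied
# to 45,816 sequences; this toy vector only illustrates the formulas)
otu_counts = [50, 30, 10, 5, 2, 1, 1, 1]
print(round(shannon(otu_counts), 3), chao1(otu_counts))
```

Note that implementations differ in the logarithm base for Shannon (QIIME 1 uses log base 2); the natural log is used here.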
The expected ‘baseline’ error was obtained by a classifier that simply predicted the most common class label.\n\nThe t-test and analysis of variance with post hoc Tukey’s test were used for comparisons of two groups and of more than two groups, respectively, and were performed using SPSS software, version 18.0 (SPSS, Inc., Chicago, IL, USA).\n\n\nData availability\n\nMetagenomic sequence data for each mouse are available from the Sequence Read Archive, accession number PRJNA491246: https://identifiers.org/insdc.sra/PRJNA491246.",
"appendix": "Grant information\n\nThis work was supported by The National Natural Science Foundation of China (Grants No. 31770540 and 61472078).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe thank Liling Zhang, Zhencheng Xue, Shengqin Wang, Xiaoniu Dai, Yue Hou, and Chunpeng He for technical assistance; and Xiaorong Wang for animal experiments.\n\n\nSupplementary material\n\nSupplementary Figure 1. Mice treatments.\n\nMice were raised on four types of bedding, each with two kinds of diet, resulting in four groups fed a normal diet (ND) and another four fed a high-fat diet (HF). The first group of mice was raised in room 1 in cages with sterile bedding of specific-pathogen-free grade (SPF) without soil (SPF-ND and SPF-HF) and served as controls. The second group was raised in cages with sterile bedding (same as the control) (SS-ND) and with sterile soil (6:11 (w/w) bedding to soil) added after weaning (SS-HF). The third group (SD-ND and SD-HF) was taken from room 1 to room 2 after weaning and then raised in cages with sterile bedding and unsterile soil (6:11 (w/w)). The last group (DE-ND and DE-HF) was born and raised in a cage with sterile bedding and unsterile soil (6:11 (w/w)).\n\nClick here to access the data.\n\nSupplementary Figure 2. Time-line indicating collection of fecal samples, bedding and diet conditions.\n\nFor the specific-pathogen-free, no soil (SPF) and sterile bedding, unsterile soil after weaning (SD) groups, fecal samples were collected four times. The first time was after mice were weaned at the age of 4 weeks (4W). For the SD group, the first sampling took place before addition of unsterile soil (and a high-fat diet). For the SPF-HF group, the first sampling occurred before exposure to a high-fat diet. The second, third, and fourth sampling times corresponded to the ages of five (5W), seven (7W), and eight (8W) weeks. 
For sterile bedding, unsterile soil (DE) and sterile bedding, sterile soil (SS) mice, samples were collected just once, at eight weeks. For the DE group, unsterile soil was added to the bedding before birth; for the SD and SS mice, it was added after weaning at the age of four weeks. All high-fat diet treatments were performed at the age of four weeks, just after the mice were weaned.\n\nClick here to access the data.\n\nSupplementary Figure 3. Bacterial richness and diversity of fecal microbiota sampled on the eighth week: comparison between living bedding groups fed a normal or high-fat diet.\n\n(A, B) Rarefaction curves of Chao1 estimators for mice fed (A) a normal or (B) a high-fat diet. (C, E, G) Mice fed a normal diet showing: (C) Chao1 estimators, (E) unique operational taxonomic units (OTUs), and (G) Shannon index. (D, F, H) Mice fed a high-fat diet showing: (D) Chao1 estimators, (F) unique OTUs, and (H) Shannon index. Values from all available samples were averaged. n = 8, 9, or 10 per treatment. *P < 0.05 and **P < 0.01 based on a two-tailed least significant difference test. SPF, sterile bedding/no soil; SD, sterile bedding/unsterile soil after weaning; DE, sterile bedding/unsterile soil before birth; SS, sterile bedding/sterile soil after weaning.\n\nClick here to access the data.\n\nSupplementary Figure 4. Changes in the relative abundance of the Bacteroidetes, Actinobacteria, and Firmicutes phyla for male and female mice fed a normal diet.\n\nValues from all available samples were averaged (n = 3, 4, or 5 per treatment). (A) Female and (B) male mice of the sterile bedding/no soil (SPF), sterile bedding/sterile soil after weaning (SS), sterile bedding/unsterile soil after weaning (SD), and sterile bedding/unsterile soil before birth (DE) groups collected on the eighth week. 
(C to F) Samples collected on the fourth (W4), fifth (W5), seventh (W7), and eighth (W8) week: (C) female and (D) male mice of the SPF group; (E) female and (F) male mice of the SD group.\n\nClick here to access the data.\n\nSupplementary Figure 5. Changes in the relative abundance of the Bacteroidetes, Actinobacteria, and Firmicutes phyla for male and female mice fed a high-fat diet.\n\nValues from all available samples were averaged (n was 3, 4, or 5 per treatment). (A) Female and (B) male mice of the sterile bedding/no soil (SPF), sterile bedding/sterile soil after weaning (SS), sterile bedding/unsterile soil after weaning (SD), and sterile bedding/unsterile soil before birth (DE) groups collected on the eighth week. (C to F) Samples collected on the fourth (W4), fifth (W5), seventh (W7), and eighth (W8) week: (C) female and (D) male mice of the SPF group; (E) female and (F) male mice of the SD group.\n\nClick here to access the data.\n\nSupplementary Figure 6. Principal coordinates analysis (PCoA) of unweighted UniFrac distances between fecal microbiota of mice from four living bedding conditions sampled on the eighth week.\n\nThe analysis was based on the V4 region of bacterial 16S rRNA gene sequencing data sets. The different treatments are color-coded. (A) sterile bedding/no soil (SPF) versus sterile bedding/unsterile soil before birth (DE) group; (B) SPF versus sterile bedding/sterile soil after weaning (SS) group; (C) sterile bedding/unsterile soil after weaning (SD) versus DE group; (D) SD versus SS group. All groups were fed either a normal diet (ND) or a high-fat diet (HF). n = 8, 9, or 10 per treatment.\n\nClick here to access the data.\n\nSupplementary Figure 7. PCoA of unweighted UniFrac distances between fecal microbiota of SPF and SD mice sampled over four weeks.\n\nThe analysis was based on the V4 region of bacterial 16S rRNA gene sequencing datasets. The different ages are color-coded. 
(A, B) sterile bedding/no soil (SPF) group fed a (A) normal or (B) high-fat diet; (C, D) SD group fed a (C) normal diet (CD) or (D) high-fat diet (HFD). n = 8, 9, or 10 per treatment.\n\nClick here to access the data.\n\nSupplementary Table 1. Diet formula.\n\nClick here to access the data.\n\nSupplementary Table 2. Sample information.\n\nClick here to access the data.\n\nSupplementary Table 3. Multiple phylum-level comparisons.\n\nClick here to access the data.\n\nSupplementary Table 4. Phylum abundance comparison.\n\nClick here to access the data.\n\nSupplementary Table 5. Multiple comparisons of diversity.\n\nClick here to access the data.\n\nSupplementary Table 6. Diversity index.\n\nClick here to access the data.\n\nSupplementary Table 7. Phylum abundance comparison between female and male mice.\n\nClick here to access the data.\n\nSupplementary Table 8. Random Forests for mice fed a high-fat diet.\n\nClick here to access the data.\n\nSupplementary Table 9. Random Forests for mice fed a normal diet.\n\nClick here to access the data.\n\nSupplementary Table 10. Random Forests for mice between diets.\n\nClick here to access the data.\n\nSupplementary Table 11. P-values of multiple comparisons for unweighted UniFrac distances.\n\nClick here to access the data.\n\nSupplementary File 1. Completed ARRIVE checklist.\n\nClick here to access the data.\n\n\nReferences\n\nLey RE, Turnbaugh PJ, Klein S, et al.: Microbial ecology: human gut microbes associated with obesity. Nature. 2006; 444(7122): 1022–1023. PubMed Abstract | Publisher Full Text\n\nTurnbaugh PJ, Ley RE, Mahowald MA, et al.: An obesity-associated gut microbiome with increased capacity for energy harvest. Nature. 2006; 444(7122): 1027–1031. PubMed Abstract | Publisher Full Text\n\nDominguez-Bello MG, Costello EK, Contreras M, et al.: Delivery mode shapes the acquisition and structure of the initial microbiota across multiple body habitats in newborns. Proc Natl Acad Sci U S A. 2010; 107(26): 11971–11975. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCostello EK, Stagaman K, Dethlefsen L, et al.: The application of ecological theory toward an understanding of the human microbiome. Science. 2012; 336(6086): 1255–1262. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBäckhed F, Roswall J, Peng Y, et al.: Dynamics and Stabilization of the Human Gut Microbiome during the First Year of Life. Cell Host Microbe. 2015; 17(5): 690–703. PubMed Abstract | Publisher Full Text\n\nKoenig JE, Spor A, Scalfone N, et al.: Succession of microbial consortia in the developing infant gut microbiome. Proc Natl Acad Sci U S A. 2011; 108 Suppl 1: 4578–4585. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYatsunenko T, Rey FE, Manary MJ, et al.: Human gut microbiome viewed across age and geography. Nature. 2012; 486(7402): 222–227. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDavid LA, Maurice CF, Carmody RN, et al.: Diet rapidly and reproducibly alters the human gut microbiome. Nature. 2014; 505(7484): 559–563. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Filippo C, Cavalieri D, Di Paola M, et al.: Impact of diet in shaping gut microbiota revealed by a comparative study in children from Europe and rural Africa. Proc Natl Acad Sci U S A. 2010; 107(33): 14691–14696. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu GD, Chen J, Hoffmann C, et al.: Linking long-term dietary patterns with gut microbial enterotypes. Science. 2011; 334(6052): 105–108. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCotillard A, Kennedy SP, Kong LC, et al.: Dietary intervention impact on gut microbial gene richness. Nature. 2013; 500(7464): 585–588. PubMed Abstract | Publisher Full Text\n\nKovatcheva-Datchary P, Nilsson A, Akrami R, et al.: Dietary Fiber-Induced Improvement in Glucose Metabolism Is Associated with Increased Abundance of Prevotella. Cell Metab. 2015; 22(6): 971–982. 
PubMed Abstract | Publisher Full Text\n\nWalker AW, Ince J, Duncan SH, et al.: Dominant and diet-responsive groups of bacteria within the human colonic microbiota. ISME J. 2011; 5(2): 220–230. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLey RE, Hamady M, Lozupone C, et al.: Evolution of mammals and their gut microbes. Science. 2008; 320(5883): 1647–1651. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuegge BD, Kuczynski J, Knights D, et al.: Diet drives convergence in gut microbiome functions across mammalian phylogeny and within humans. Science. 2011; 332(6032): 970–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCharbonneau MR, O'Donnell D, Blanton LV, et al.: Sialylated Milk Oligosaccharides Promote Microbiota-Dependent Growth in Models of Infant Undernutrition. Cell. 2016; 164(5): 859–871. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlanton LV, Charbonneau MR, Salih T, et al.: Gut bacteria that prevent growth impairments transmitted by microbiota from malnourished children. Science. 2016; 351(6275): pii: aad3311. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVijay-Kumar M, Aitken JD, Carvalho FA, et al.: Metabolic syndrome and altered gut microbiota in mice lacking Toll-like receptor 5. Science. 2010; 328(5975): 228–231. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBuffington SA, Di Prisco GV, Auchtung TA, et al.: Microbial Reconstitution Reverses Maternal Diet-Induced Social and Synaptic Deficits in Offspring. Cell. 2016; 165(7): 1762–1775. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFaith JJ, McNulty NP, Rey FE, et al.: Predicting a human gut microbiota's response to diet in gnotobiotic mice. Science. 2011; 333(6038): 101–104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoeth RA, Wang Z, Levison BS, et al.: Intestinal microbiota metabolism of L-carnitine, a nutrient in red meat, promotes atherosclerosis. Nat Med. 2013; 19(5): 576–585. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu GD, Compher C, Chen EZ, et al.: Comparative metabolomics in vegans and omnivores reveal constraints on diet-dependent gut microbiota metabolite production. Gut. 2016; 65(1): 63–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEge MJ, Mayer M, Normand AC, et al.: Exposure to environmental microorganisms and childhood asthma. N Engl J Med. 2011; 364(8): 701–709. PubMed Abstract | Publisher Full Text\n\nStein MM, Hrusch CL, Gozdz J, et al.: Innate Immunity and Asthma Risk in Amish and Hutterite Farm Children. N Engl J Med. 2016; 375(5): 411–421. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchnorr SL, Candela M, Rampelli S, et al.: Gut microbiome of the Hadza hunter-gatherers. Nat Commun. 2014; 5: 3654. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhou D, Zhang H, Bai Z, et al.: Exposure to soil, house dust and decaying plants increases gut microbial diversity and decreases serum immunoglobulin E levels in BALB/c mice. Environ Microbiol. 2016; 18(5): 1326–1337. PubMed Abstract | Publisher Full Text\n\nZhou D: Impact of sanitary living environment on gut microbiota. Precision Medicine. 2016; 2: e1161. Publisher Full Text\n\nBai Z, Zhang H, Li N, et al.: Impact of Environmental Microbes on the Composition of the Gut Microbiota of Adult BALB/c Mice. PLoS One. 2016; 11(8): e0160568. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAngenent LT, Kelley ST, St Amand A, et al.: Molecular identification of potential pathogens in water and air of a hospital therapy pool. Proc Natl Acad Sci U S A. 2005; 102(13): 4860–4865. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPenders J, Thijs C, Vink C, et al.: Factors influencing the composition of the intestinal microbiota in early infancy. Pediatrics. 2006; 118(2): 511–521. PubMed Abstract | Publisher Full Text\n\nWhitacre CC: Sex differences in autoimmune disease. Nat Immunol. 2001; 2(9): 777–780. 
PubMed Abstract | Publisher Full Text\n\nKnights D, Costello EK, Knight R: Supervised classification of human microbiota. FEMS Microbiol Rev. 2011; 35(2): 343–359. PubMed Abstract | Publisher Full Text\n\nAdlerberth I, Lindberg E, Aberg N, et al.: Reduced enterobacterial and increased staphylococcal colonization of the infantile bowel: an effect of hygienic lifestyle? Pediatr Res. 2006; 59(1): 96–101. PubMed Abstract | Publisher Full Text\n\nVatanen T, Kostic AD, d'Hennezel E, et al.: Variation in Microbiome LPS Immunogenicity Contributes to Autoimmunity in Humans. Cell. 2016; 165(4): 842–853. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMerrifield CA, Lewis MC, Berger B, et al.: Neonatal environment exerts a sustained influence on the development of the intestinal microbiota and metabolic phenotype. ISME J. 2016; 10(1): 145–157. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEggesbø M, Moen B, Peddada S, et al.: Development of gut microbiota in infants not exposed to medical interventions. APMIS. 2011; 119(1): 17–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartin R, Makino H, Cetinyurek Yavuz A, et al.: Early-Life Events, Including Mode of Delivery and Type of Feeding, Siblings and Gender, Shape the Developing Gut Microbiota. PLoS One. 2016; 11(6): e0158498. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBastide NM, Chenni F, Audebert M, et al.: A central role for heme iron in colon carcinogenesis associated with red meat intake. Cancer Res. 2015; 75(5): 870–879. PubMed Abstract | Publisher Full Text\n\nPotera C: Red Meat and Colorectal Cancer: Exploring the Potential HCA Connection. Environ Health Perspect. 2016; 124(10): A189. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKau AL, Ahern PP, Griffin NW, et al.: Human nutrition, the gut microbiome and the immune system. Nature. 2011; 474(7351): 327–336. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nZeng Y, Du J, Pu X, et al.: Coevolution between Cancer Activities and Food Structure of Human Being from Southwest China. Biomed Res Int. 2015; 2015: 497934. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYangzong Y, Shi Z, Nafstad P, et al.: The prevalence of childhood asthma in China: a systematic review. BMC Public Health. 2012; 12: 860. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOtt W: Human activity patterns: A review of the literature for estimating time spent indoors, outdoors and in transit. Las Vegas. 1989.\n\nKilkenny C, Browne WJ, Cuthill IC, et al.: Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010; 8(6): e1000412. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZoetendal EG, Booijink CC, Klaassens ES, et al.: Isolation of RNA from bacterial samples of the human gastrointestinal tract. Nat Protoc. 2006; 1(2): 954–959. PubMed Abstract | Publisher Full Text\n\nBolger AM, Lohse M, Usadel B: Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014; 30(15): 2114–2120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgar RC, Haas BJ, Clemente JC, et al.: UCHIME improves sensitivity and speed of chimera detection. Bioinformatics. 2011; 27(16): 2194–2200. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang Q, Garrity GM, Tiedje JM, et al.: Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy. Appl Environ Microbiol. 2007; 73(16): 5261–5267. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010; 7(5): 335–336. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiaw A, Wiener M: Classification and regression by randomForest. R News. 2002; 2(3): 18–22. Reference Source"
}
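The methods above describe an expected ‘baseline’ error obtained from a classifier that always predicts the most common class label. A minimal Python sketch of that baseline (an illustrative reimplementation, not the authors' actual Random Forests/SPSS workflow; the `"HF"`/`"ND"` labels below are just example values):

```python
from collections import Counter

def baseline_error(labels):
    """Error rate of a classifier that always predicts the majority class."""
    majority_count = Counter(labels).most_common(1)[0][1]
    return 1 - majority_count / len(labels)

# Example: 6 of 10 samples carry the majority label "HF",
# so the baseline error rate is 4/10 = 0.4.
samples = ["HF"] * 6 + ["ND"] * 4
print(baseline_error(samples))  # 0.4
```

Any learned classifier is only informative to the extent that its cross-validated error falls below this majority-class baseline.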
|
[
{
"id": "38951",
"date": "03 Feb 2020",
"name": "Xuegong Zhang",
"expertise": [
"Reviewer Expertise Pattern Recognition",
"Machine Learning and Bioinformatics."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting paper that shows the influence of bedding soil on the experimental mice's microbiome, especially the differential effects on mice with different diet. The experiment results and statistical analyses are overall convincing, but there are a few minor points that the work can be further improved with more solid conclusions:\nThe paper only described the soils used \"was from the top 10cm of a farm ground with abundant goats, hens and ducks\". No further information about the procedures of sterilizing the soils were provided. If the authors can conduct a systematic analysis on the biological and chemical characteristics of sterile and unsterile soil samples, more concrete results might be observed.\n\nSome of the observations reported in the paper was based on p-values that are not very small, like p=0.042 or 0.019. It will be better if the authors also present the effect size of those tests and discuss the issue of multiple tests.\n\nIn beginning of the Introduction, the authors stated that \"The main contributing factors are delivery mode, age, antibiotic treatment, diet, and the living environment\". While those are the main factors that have been reported to affect gut microbiota, the statement was too strong and exclusive. Actually those factors only explain part of the variations, and there could be other important factors that have not been revealed yet.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "59635",
"date": "21 Feb 2020",
"name": "Jose F. Garcia-Mazcorro",
"expertise": [
"Reviewer Expertise Microbial ecology",
"bioinformatics."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very interesting study with lots of potential for multiples areas in medicine and biomedical sciences. I have only a few comments that may improve the manuscript.\nDo not use microflora.\n\nDo you have any information about the soil characteristics?\n\nI am missing more details in the discussion regarding the mechanism behind the effect of soil in their immediate environment. The authors talk about coprophagy but do not mention anything about whether they observed this behavior in the animals. Also, if possible, it would not take much time and money to analyze the microbiota in the soil and to investigate whether there are any shared OTUs. In this regard, I hope the following paper is somehow useful for the authors1. Is it possible that the soil changed some parameters and behavior of the immune system and therefore the effect on the microbiota could be indirect?\n\nCould you please add information about any OTUs that were shared among the animals?\n\nThe clustering of samples in Fig. 4 is simply too strong. For me, such a strong clustering would only be explained by processing and sequencing replicates of the same samples. Please explain anything you can add to explain such a strong clustering.\n\nPlease consider performing any additional analysis on the 16S sequences you think is appropriate (e.g. 
PICRUSt, BugBase, etc).\n\nThe link provided to SRA seems not to be working.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1588
|
https://f1000research.com/articles/7-1587/v1
|
01 Oct 18
|
{
"type": "Research Article",
"title": "Bird diversity of the wetland area in Suwi river, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia",
"authors": [
"Nur Linda Isa",
"Monica Kusneti",
"Rudy Agung Nugroho",
"Nur Linda Isa",
"Monica Kusneti"
],
"abstract": "The aim of the study was to determine the bird diversity in an essential ecological area, Suwi river, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia. The observation was performed at 5 locations using direct observation at two different times, 06.00-09.00 AM and 15.00-18.00 PM (Indonesia Central Standard Time-eight hours ahead or UTC+8 of GMT) from April 2017 to March 2018. The results stated that 63 species from 28 family were found with diversity index was 3.56. Fifteen birds species have a protected status according to PP no. 7 tahun 1999 (Government regulation document number 7 year 1999) about the preservation of plants and animals, while six species are in appendix two and one species is included in appendix one of the Convention on International Trade in Endangered (CITES). Appendix one includes species threatened with extinction. Trade in specimens of these species is permitted only in exceptional circumstances. Meanwhile, appendix two includes species not necessarily threatened with extinction, but in which trade must be controlled in order to avoid utilization incompatible with their survival. In addition, four species were of vulnerable status according to the International Union for Conservation of Nature (IUCN). This study provides information regarding the biodiversity of birds in an essential ecological area and contributes useful base line data for conservation activities.",
"keywords": [
"Birds",
"Suwi river",
"Muara Ancalong",
"Kutai Timur",
"Conservation"
],
"content": "Introduction\n\nKalimantan island, known as Borneo, has a high biodiversity that remains one of the most forested provinces in Indonesia1. Kalimantan, especially east Kalimantan, also has a close relationship with Mahakam river. In the lower part of the Mahakam river, there are several large seasonal lakes and hundreds of small lakes that form wetland area2. In addition, the lakes have several small rivers, both inflows and backflows, which flow from both the Kelinjau and Kedang Kepala Rivers3. One of these small rivers is called the Suwi River, located in Muara ancalong district, Kutai Timur regency, East Kalimantan Province, Indonesia.\n\nRecently, the wetland area in Suwi river and Mesangat lake have been proposed as an essential ecosystem (13.964,13 hectare) to support in the protection of both Crocodylus siamensis and Tomistoma schlegii, to prevent them from becoming extinct (see merdeka.com article). In addition, populations of fish can be also found in the Suwi River, which is surrounded by a large area of forest with populations of birds.\n\nThough some studies have been done in the Suwi River1,4, there is no information regarding the bird diversity in this area. Therefore, the aim of the current study was to inventory the bird diversity in this essential ecosystem of the Suwi River to support the conservation actions of Mesangat and Suwi wetland essential ecosystem, Muara Ancalong, Kutai Timur, Indonesia.\n\n\nMethods\n\nThis study was performed in the Suwi River, Muara Ancalong, Kutai Timur, Indonesia (Figure 1) from April 2017 to March 2018. The observation locations were\n\na. Upper stream (latitude: 0,42469; longitude 116,61273)\n\nb. Middle stream (latitude: 0,40816; longitude: 116,61749)\n\nc. Downstream (latitude: 0,38983; longitude: 116,60514)\n\nd. Ketiaw (latitude: 0,40638; longitude: 116,60346)\n\ne. Loa Bekara (latitude: 0,41527; longitude: 116,63173)\n\na. 
Upper stream (latitude: 0,42469; longitude 116,61273), Middle stream (latitude: 0,40816; longitude: 116,61749), Downstream (latitude: 0,38983; longitude: 116,60514), Ketiaw (latitude: 0,40638; longitude: 116,60346), Loa Bekara (latitude: 0,41527; longitude: 116,63173)\n\nDirect observation by researchers in the field at two different time points, 06.00–09.00 AM and 15.00–18.00 PM (Indonesia Central Standard Time-eight hours ahead or UTC+8 of GMT) was performed to collect the numbers of the individual birds and to identify species in each location. The equipment used included Binocular celestron, upclose G2 10-30×50 zoom porro (Torrance, California, United States) and Global Positioning System (GPSMAP) Garmin 78s. Images of found-birds were taken using a digital camera (Canon PowerShot SX520 HS PC2152 16.0MP Digital Camera, Canon, Inc., USA), To identify the species of the birds found, the guidelines book of Mackinnon et al.5 was used.\n\n\nData analysis\n\nThe diversity index was performed following Shannon Index of Diversity, with the equation of Ludwig and Reynolds6 using Microsoft Excel 2013:\n\nH’ = Σ (ni/N) log (ni/N)\n\nwhere :\n\nH’ = Shannon-Weiner index diversity\n\nni = the number of individuals found for a species\n\nN = the number of individuals from all species\n\n\nResults and discussion\n\n63 bird species from 27 family were found during the study (Table 1). The full data of all birds found in the Suwi River wetland, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia can be seen in Dataset 17.\n\nBird species that frequently were found in each observation location were Pelargopsis capensis (n = 35), followed by Ardeola speciosa (n = 31), Treron vernans (n = 29), Anhinga melanogaster (n = 25) and Clamator coromandus (n = 20).\n\nThe presence of Pelargopsis capensis, Anhinga melanogaster and Ardeola speciosa is likely due to the abundance of fish in the observation locations. 
Meanwhile, Treron vernans is a seed- and fruit-eating bird species that can be observed in the Suwi river wetland area, where local tree species such as Ficus sp. and Mallotus sumatranus can be widely found. Further, Clamator coromandus is a small insectivorous bird species that lives among trees, mangroves, farmland and bushes. Some of the birds observed during the study are categorized on the threatened lists of the International Union for Conservation of Nature (IUCN), the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), or the Indonesia government regulation document (PP) number 7 year 1999 about the preservation of plants and animals8 (Table 2).\n\nIUCN = International Union for Conservation of Nature, CITES = Convention on International Trade in Endangered Species of Wild Fauna and Flora, PP No 7 1999 = Indonesia Government regulation document (PP) number 7 year 1999 about the preservation of plants and animals. LR/Lc = Lower risk/least concern, LR/nt = Lower risk/near threatened, VU = Vulnerable, NL = Not on the list, AI = Appendix I, AII = Appendix II, P = Protected, NP = Not protected\n\nIn the present study 63 species were identified, of which 15 species have protected status according to the Indonesia Government regulation document (PP) number 7 year 1999 on the preservation of plants and animals. Moreover, six species are categorized in appendix II of CITES, indicating that these species are not necessarily threatened with extinction, but that trade in these animals must be controlled and regulated in order to avoid utilization incompatible with their survival. Furthermore, only one species is classified in appendix I of CITES, meaning the species is threatened with extinction. Trade in specimens of this species is only permitted under special circumstances. 
Four species were found to be vulnerable according to the IUCN.\n\nThe diversity index of the birds of the Suwi River, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia was found to be 3.56. According to the Shannon-Wiener index, the bird diversity of the Suwi river, Muara Ancalong (H’ > 3) is classified as high. However, illegal hunting for commercial use and land-use change for palm oil plantations might threaten the habitat of the birds. Thus, conservation actions and further regulation should be considered.\n\n\nConclusion\n\nThis first report of bird diversity in an essential ecological area, the Suwi River wetland, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia, has found that it contains a number of birds that are protected according to the IUCN, CITES and an Indonesian government document. Though the diversity index of the birds is classified as high, conservation action should be taken into consideration in order to protect the birds as well as their natural habitat.\n\n\nData availability\n\nDataset 1: List of birds found during observation between April 2017 and March 2018 in the Suwi River wetland, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia. 10.5256/f1000research.16251.d2195717\n\nDataset 2: Pictures of birds found during observation between April 2017 and March 2018 in the Suwi River wetland, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia. 10.5256/f1000research.16251.d2195869",
"appendix": "Grant information\n\nThis work was support by Keidanren Nature Conservation Fund (KNCF) [2007-050]\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors thank to Keidanren Nature Conservation Fund (KNCF) for grant support and Faculty of Mathematics and Natural Sciences, Mulawarman University, Samarinda, East Kalimantan.\n\n\nReferences\n\nNugroho RA, Santoso YGG, Nur FM, et al.: A preliminary study on the biodiversity of fish in the Suhui River, Muara Ancalong, East Kutai, Indonesia. Aquac Aquar Conserv Legis. 2016; 9(2). Reference Source\n\nChokkalingam U, Kurniawan I, Ruchiat Y: Fire, livelihoods, and environmental change in the middle Mahakam peatlands, East Kalimantan. Ecol Soc. 2005; 10(1): 26. Publisher Full Text\n\nStuebing R, Sommerlad R, Staniewicz A: Conservation of the Sunda gharial Tomistoma schlegelii in Lake Mesangat, Indonesia. Int Zoo Yearb. 2015; 49(1): 137–149. Publisher Full Text\n\nWahyudi D, Kusneti M, Suimah: Biodiversity inventory and conservation opportunity of Suwi wetlands, Muara Ancalong, East Kalimantan, Indonesia. In AIP Conference Proceedings. AIP Publishing. 2017; 1813(1): 020013. Publisher Full Text\n\nMacKinnon JR, Phililipps K, Balen S: Burung-burung di Sumatera, Jawa, Bali dan Kalimantan: termasuk Sabah, Sarawak dan Brunei Darussalam (The birds in Sumatera, Java, Bali, and Borneo Island, including Sabah, Serawak, and Brunei Darussalam). [GEF Biodiversity Collections Project], Puslitbang Biologi-LIPI [=Pusat Penelitian dan Pengembangan Biologi, Lembaga Ilmu Pengetahuan Indonesia]. 1999. Reference Source\n\nLudwig JA, Reynolds JF: Statistical Ecology. A Primer on Methods and Computing. New York: Wiley. 1988; 337. Reference Source\n\nIsa NL, Kusneti M, Nugroho RA: Dataset 1 in: Bird diversity of wetland area in Suwi river, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia. F1000Research. 2018. 
http://www.doi.org/10.5256/f1000research.16251.d219571\n\nPP, Peraturan Pemerintah Nomor 7 Tahun 1999: Pengawetan Jenis Tumbuhan dan Satwa. K.H.d.H.A.M.R. Indonesia, Editor 2018, Kementerian Hukum dan Hak Asasi Manusia Republik Indonesia: Jakarta, Indonesia. Reference Source\n\nIsa NL, Kusneti M, Nugroho RA: Dataset 2 in: Bird diversity of wetland area in Suwi river, Muara Ancalong, Kutai Timur, Kalimantan Timur, Indonesia. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16251.d219586"
}
|
[
{
"id": "38905",
"date": "17 Oct 2018",
"name": "Huilquer Francisco Vogel",
"expertise": [
"Reviewer Expertise Bird Ecology"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is an important report for activities of environmental planning of flooded areas of the Suwi river.\nIntroduction:\nBeyond the importance for wildlife, is there any direct use by the population?\n\nIn the second paragraph, the citation of “(see merdeka.com article)” could be revised and better detailed in the text.\n\nWhat are the threats to the wetlands on the site? What is the importance of habitat for birds in this location? Are there too many nestling sites? Is it one of the largest bird feeding areas in the region?\n\nMethods:\nA very important area for biodiversity, but no environmental descriptions are provided. When dealing with flooded areas, it is not mentioned if the ecosystem works by flooding pulse. What is the climate? I missed a detail on the map by locating the place in the world. It should explain the map better.\n\nIn data collection the sampling efforts are not sufficiently explained. I think it should be \"heavier\" in the methodology to make the work at least replicable. So you could put the coordinates in a small table. How do you know if sampling was enough? How many events do you sample?\n\nFrom the analysis point of view, instead of applying the diversity index, it would be more interesting to rank the species in order to know which are the most abundant and rare species groups.\n\nResults:\nI believe that the documents used for the list of endangered species could already have been mentioned in the methodology. 
From the ecological point of view, which should be discussed in more depth: which species are unique to wetlands and which are generalists?\n\nThere is also one incorrect spelling of a scientific name.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1587
|
https://f1000research.com/articles/7-1235/v1
|
10 Aug 18
|
{
"type": "Software Tool Article",
"title": "TRGAted: A web tool for survival analysis using protein data in the Cancer Genome Atlas.",
"authors": [
"Nicholas Borcherding",
"Nicholas L. Bormann",
"Andrew P. Voigt",
"Weizhou Zhang",
"Nicholas Borcherding",
"Nicholas L. Bormann",
"Andrew P. Voigt"
],
"abstract": "Reverse-phase protein arrays (RPPAs) are a highthroughput approach to protein quantification utilizing an antibody-based micro-to-nano scale dot blot. Within the Cancer Genome Atlas (TCGA), RPPAs were used to quantify over 200 proteins in 8,167 tumor or metastatic samples. This protein-level data has particular advantages in assessing putative prognostic or therapeutic targets in tumors. However, many of the available pipelines do not allow for the partitioning of clinical and RPPA information to make meaningful conclusions. We developed a cloud-based application, TRGAted to enable researchers to better examine survival based on single or multiple proteins across 31 cancer types in the TCGA. TRGAted contains up-to-date overall survival, disease-specific survival, disease-free interval and progression-free interval information. Furthermore, survival information for primary tumor samples can be stratified based on gender, age, tumor stage, histological type, and subtype, allowing for highly adaptive and intuitive user experience. The code and processed data is open sourced and available on github and with a tutorial built into the application for assisting users.",
"keywords": [
"Bioinformatics",
"Cancer Proteomics",
"Survival Analysis",
"TCGA"
],
"content": "Introduction\n\nImproving prognostic predictions and the identification of potential therapeutic targets is of particular interest to clinicians. Quantification of messenger RNA levels at a genome-wide level has proven valuable in the discovery of gene expression profiles, which can serve as biomarkers for clinical outcomes in cancer1. However, RNA quantification of tumor or patient cohorts is a proxy for protein level, with many cellular processes above transcription that ultimately regulate protein level. The availability of protein-level quantification for the TCGA cohorts allow for more relevant clinical outcome predictions compared to mRNA levels. Currently available applications provide entry-level analysis in correlational, differential, and survival modalities for the RPPA information. However, survival analysis in these applications rely on median- or mean-based survival data and do not allow for the use of clinical variables2–4.\n\nWith these limitations in mind, we developed a new open-source web application, TRGAted (Figure 1). Built on the R shiny framework, TRGAted is an intuitive data analysis tool for parsing survival information based on over 200 proteins in 31 cancer types. TRGAted is comprised of processed RPPA information, survival information, and code, allowing users to run instances locally or modify the code with ease.\n\nEach file communicates within the R Shiny framework. On the user side (left, blue), users select pertinent cancer type, protein of interest, and clinical variables into the CSS-enabled user interface. This information is received by the server file enabling the subsequent run in R. On the server side (right, orange), the specific cancer type from the database, R packages, and functions are retrieved and executed. 
After execution, the server file returns both tabular and graphical output (purple) to the user interface for display.\n\n\nMethods\n\nLevel 4 TCGA RPPA data for each cancer type were downloaded from the TCPA Portal developed by the MD Anderson Cancer Center4. Across all proteins, individual values were scaled using Z-scores. A summary of the information available for each cancer dataset is in Table 1. Additionally, uveal melanoma (UVM) was excluded from the datasets due to a low number of samples with RPPA quantification (n=12). Clinical and survival information for each cancer dataset was downloaded from the recently published, curated TCGA pan-cancer clinical data resource5. Overall survival, disease-specific survival, disease-free interval, and progression-free interval information was added to primary tumor RPPA quantifications for each cancer type. Unlike other cancer types, metastatic samples were kept in the skin cutaneous melanoma (SKCM) RPPA-based dataset due to the highly metastatic nature of the disease. SKCM in the TRGAted application consists of 96 primary tumor samples and 258 metastatic samples. Of the 8,167 samples available in the TCPA, overall survival (OS) data were available for 7,714 patients, disease-specific survival (DSS) data for 7,240 patients, disease-free interval (DFI) data for 3,887 patients, and progression-free interval (PFI) data for 7,315 patients (Table 1).\n\nOS, overall survival; DSS, disease-specific survival; DFI, disease-free interval; PFI, progression-free interval.\n\nThe TRGAted application was written and tested using R v3.5.1. The interactive plots are made using shiny (v1.1.0) and ggplot2 (v3.0.0). Plots can be downloaded as .png, .pdf, or .svg files. Data used to generate the individual plots can be downloaded as .csv files.\n\nOperation: Minimum system requirements for running TRGAted locally are modest and include an Intel-compatible CPU and 1 gigabyte of RAM. 
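The Z-score scaling described in the Methods above can be sketched in plain Python (an illustrative sketch only; TRGAted itself is written in R, and the function name here is hypothetical). Scaling is shown per protein across samples, the usual convention:

```python
from statistics import mean, pstdev

def z_scale(values):
    """Scale one protein's measurements across samples to Z-scores.

    Illustrative only; TRGAted performs this scaling in R.
    """
    mu = mean(values)
    sigma = pstdev(values)  # population standard deviation
    return [(v - mu) / sigma for v in values]

# Hypothetical RPPA measurements for one protein across four samples.
scaled = z_scale([1.0, 2.0, 3.0, 4.0])
```

After scaling, each protein's values have mean 0 and unit standard deviation, making values comparable across proteins.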
Running TRGAted from the Shiny server requires a modern browser and an internet connection.\n\nKaplan-Meier survival curves can be generated by selecting the cancer type, survival type and protein(s) of interest (Figure 2). Kaplan-Meier curves are generated using the survival (v2.41-3) and survminer (v0.4-1) R packages. Multi-protein survival analysis utilizes the mean values of protein probes, similar to gene-expression-based survival analysis platforms6. Hazard ratios for two-group comparisons, using either the median or optimal cut-off, are derived from the Cox proportional hazards regression model in the survival R package, with the reported hazard ratio comparing high versus low protein groups. The optimal cut-off feature uses the surv_cutpoint function of the survminer package, selecting the cut-off with the minimal p-value based on the log-rank method. This function uses the maximally selected rank statistic (maxstat, v0.7-25) R package, which finds the maximal standardized two-sample linear rank statistic7. In order to find clinically or biologically meaningful biomarkers, the minimal proportion cutpoint, or the maximal disparity comparison, was set at 15% versus 85% of samples. Clinical variables, dependent on the cancer type selected, can be used to filter patients into user-defined groupings. Clinical information available across all types includes: subtype, tumor stage, histological type, gender, age, and response to primary therapy.\n\nThe interface shows an example of an overall survival curve for the RAD50 protein in the basal subtype of breast cancer using the optimal cutpoint (A). Disease-specific survival, disease-free interval, and progression-free interval can also be selected (B). 
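For intuition, the product-limit (Kaplan-Meier) estimate underlying these curves can be sketched in plain Python (a simplified illustration of the standard estimator; TRGAted itself uses the survival and survminer R packages, and the function name here is hypothetical):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns (time, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            surv *= 1 - deaths / at_risk  # step down at each event time
            curve.append((t, surv))
        at_risk -= ties  # tied and censored subjects leave the risk set
        i += ties
    return curve

# Hypothetical cohort: events at t=1, 2, 3; censoring at t=2 and 4.
km = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Comparing two such curves (high versus low protein groups) with the log-rank test is what the median and optimal cut-off comparisons in the interface perform.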
The cutpoint can be varied to separate samples based on protein level into quartiles, tertiles, medians, or two groups split at the lowest p-value (C).\n\nTRGAted also allows for Cox proportional hazards modeling across all proteins in each cancer type, or for a single protein across all cancer types. Hazard ratios and p-values are based on the Cox regression model. Proteins filtered from the volcano plots are those with –log10(p-values) less than 0.1 or hazard ratios greater than 20. These filters were implemented to improve visualization and to reduce artifacts of the analysis pipeline, respectively. The volcano plot can be graphed as linear or natural-log transformed, to assist in the visualization of good prognostic indicators. Visualizing the proportional comparison for the volcano plots is also available.\n\nIn order to demonstrate the functionality of TRGAted, we present a basic survival analysis examining an aggressive, highly metastatic subtype of breast cancer, known as basal-like breast cancer. In this cancer, we found that RAD50, a protein involved in homologous recombination of DNA, is a novel poor prognostic marker.\n\nSurvival curves: Survival curves can be generated by selecting the cancer type, survival type, and protein or proteins of interest (Figure 2A). We also selected the subtype information to more closely examine basal-like breast cancer. Other survival types and clinical variables can be selected (Figure 2B). Samples can be divided into quartiles, tertiles, at the median, or at the optimum cutpoint based on the protein of interest (Figure 2C). Here we can see that the DNA repair protein RAD50 is a poor prognostic marker for overall (Figure 2A) and disease-specific survival (Figure 2B) in basal-like breast cancer.\n\nAcross cancer: TRGAted can be used for biomarker discovery by examining the hazard ratios for all proteins available by cancer subtype, such as basal-like breast cancer (Figure 3A). 
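The volcano-plot filters described above (dropping proteins with –log10(p) below 0.1 or hazard ratios above 20) can be sketched as follows; the example data and the helper name are hypothetical, and TRGAted applies these filters in R:

```python
import math

def volcano_points(rows, min_neglog_p=0.1, max_hr=20.0):
    """Keep (protein, ln(HR), -log10(p)) points that pass the display filters."""
    kept = []
    for name, hr, p in rows:
        neglog_p = -math.log10(p)
        if neglog_p >= min_neglog_p and hr <= max_hr:
            kept.append((name, math.log(hr), neglog_p))  # ln(HR) centers the plot at 0
    return kept

# Hypothetical per-protein Cox results: (protein, hazard ratio, p-value).
results = [
    ("RAD50", 2.5, 0.001),   # kept: significant, plausible hazard ratio
    ("PROT_A", 1.1, 0.9),    # dropped: -log10(0.9) < 0.1
    ("PROT_B", 35.0, 0.01),  # dropped: HR > 20, likely a fitting artifact
]
points = volcano_points(results)
```

Plotting ln(HR) rather than raw HR puts good prognostic markers (HR < 1) symmetrically to the left of 0, matching the natural-log display option described in the text.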
The volcano plot displays good prognostic markers on the left in blue and poor prognostic markers on the right in red. Having selected the optimal cutoff feature, a bar chart can also be generated to examine the proportion of samples in the high and low groups (Figure 3B). Protein labeling is adaptive for both the volcano plot and bar chart and will only label significant proteins. Here we see that RAD50 is one of the most significant predictors of poor overall survival in basal-like breast cancer (Figure 3A and B).\n\nThe interface shows an example of the visualization of the Cox hazard ratio of each protein across the basal subtype of breast cancer (A). Good prognostic markers appear on the left in blue, while poor prognostic markers are on the right in red. The natural log transformation allows the graph to be centered at 0 and makes the visualization of good prognostic markers easier. Labeling can be adjusted to include more or fewer proteins. Proportional comparisons for each protein using the optimal cutpoint function are available as well (B).\n\nAcross protein: TRGAted can also be used to examine the survival outcomes of a protein of interest across multiple cancers. Here, RAD50 predicts poor survival in only five cancer types: prostate, adrenocortical, breast, low-grade glioma, and head and neck cancers (Figure 4A). A summary of the hazard ratios can also be visualized by selecting the barplot function (Figure 4B).\n\nThe interface shows an example of the visualization of the Cox hazard ratio for RAD50 across all 31 cancer types (A). This feature is similar to the Across Cancer tab, with the ability to adjust labels and log-transform the Cox hazard ratios. Additionally, the hazard ratios for significant cancer types can be visualized using a bar chart (B).\n\n\nConclusions\n\nTRGAted is an open-source survival analysis application designed to allow quick and intuitive exploration of TCGA protein-level data. 
This survival analysis improves on current TCGA pipelines by providing a greater diversity of clinical and survival options and by relying on protein-level data. In addition to log-rank and Cox regression modeling, TRGAted allows users to download graphical displays and processed data for up to 7,714 samples across 31 cancer types. Built on the R Shiny framework and written in a literate style, the TRGAted code is annotated and easily modified from our GitHub repository. Under the GNU General Public License v3.0, we encourage interested groups to modify TRGAted for greater usability. Downloading and modifying TRGAted is streamlined by its relatively small size, totaling 27.2 megabytes for the application, processed data, and built-in instructional guide.\n\n\nData availability\n\nRelease 4.2 of the TCGA replicate-based normalized (level 4) RPPA data is available for 32 cancer types from the TCPA Portal at http://tcpaportal.org/tcpa/download.html. Processed data are available at https://github.com/ncborcherding/TRGAted.\n\n\nSoftware availability\n\nSource code is available from GitHub: https://github.com/ncborcherding/TRGAted/tree/v1.0.0\n\nArchived source code at time of publication: http://doi.org/10.5281/zenodo.13238288\n\nLicense: GNU General Public License v3.0",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed\n\n\nGrant information\n\nFunding for this project was provided from National Institute of Health F30 fellowship [CA206255] to N.B. and NIH Grant R01 [CA200673] to W.Z.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThese results here are in whole or part based upon data generated by the TCGA Research Network.\n\n\nReferences\n\nXi X, Li T, Huang Y, et al.: RNA Biomarkers: Frontier of Precision Medicine for Cancer. Noncoding RNA. 2017; 3(1). pii: E9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCerami E, Gao J, Dogrusoz U, et al.: The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov. 2012; 2(5): 401–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGao J, Aksoy BA, Dogrusoz U, et al.: Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Sci Signal. 2013; 6(269): pl1. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi J, Lu Y, Akbani R, et al.: TCPA: a resource for cancer functional proteomics data. Nat Methods. 2013; 10(11): 1046–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu J, Lichtenberg T, Hoadley KA, et al.: An Integrated TCGA Pan-Cancer Clinical Data Resource to Drive High-Quality Survival Outcome Analytics. Cell. 2018; 173(2): 400–416.e11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGyörffy B, Lanczky A, Eklund AC, et al.: An online survival analysis tool to rapidly assess the effect of 22,277 genes on breast cancer prognosis using microarray data of 1,809 patients. Breast Cancer Res Treat. 2010; 123(3): 725–31. PubMed Abstract | Publisher Full Text\n\nWright MN, Dankowski T, Ziegler A: Unbiased split variable selection for random survival forests using maximally selected rank statistics. Stat Med. 
2017; 36(8): 1272–84. PubMed Abstract | Publisher Full Text\n\ntheHumanBorch: ncborcherding/TRGAted: First Release TRGAted (Version v1.0.0). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1323828"
}
|
[
{
"id": "37844",
"date": "03 Sep 2018",
"name": "Jean Claude Zenklusen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis report by Borcherding et al. deals with the creation of a tool to visualize the impact of proteins represented in the Reverse Phase Protein Array (RPPA) on the survival of patients used in The Cancer Genome Atlas (TCGA). The tools are straight forward, uses a common standard (it is an R module) and thus has the potential of being highly utilized by the cancer research community. There are no major flaws with the module, the code is deposited in github, allowing easy access to users.\n\nTwo minor issues need to be corrected:\nThe manuscript will benefit from editing by a native English speaker. Phrasing and grammar are uncommon at times. Reference 5 is referred as “updated TCGA clinical data”. This is incorrect. The paper referred to is an interpretation of the clinical data in the context of the Pan Can Atlas effort by the TCGA, but it is NOT the official clinical data. It is a derived product of it.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4017",
"date": "01 Oct 2018",
"name": "Nicholas Borcherding",
"role": "Author Response",
"response": "Thank you for your very kind review and suggestions. In the most recent submission, we have addressed your concerns in editing the manuscript and adding additional details on the source of clinical information."
}
]
},
{
"id": "38311",
"date": "26 Sep 2018",
"name": "Austin Gillen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript, Borcherding, et al. describe an interactive web interface (implemented in R using Shiny) that allows for the visualization of cancer patient survival data from TCGA (The Cancer Genome Atlas) based on protein expression measured by RPPA (Reverse-Phase Protein Array). The software is easy to use, well documented, and flexible enough for most common use cases. The code is available in a public github repository, encouraging further development and expansion of the tool to suit users' needs. There are no major flaws in either the implementation of the tool or the associated manuscript, but two minor issues should be addressed:\n1. As noted by reviewer 1, the manuscript should be carefully proofread for typographical errors and standard english grammar. For example: the common R package ggplot2 is referred to in the text as \"ggplots2\".\n2. The web interface indicates that the TCPA data included with the package were downloaded on 2017/11/10, but the current TCPA release (4.2) was made available on 2018/07/18. This tool is substantially less useful if it is not updated when new source data is released. A plan for updating the packaged TCPA and survival data should be included in the manuscript (and implemented in the package). 
Automating this process as a function in the package would be ideal, but detailed instructions for updating the packaged data for local installations would be acceptable as well.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "4018",
"date": "01 Oct 2018",
"name": "Nicholas Borcherding",
"role": "Author Response",
"response": "We would like to thank the reviewer for the great suggestions. We have recently submitted an updated version of the manuscript with more thorough editing. Additionally, we will work on implementing an automatic pull feature for the data to ensure the most up-to-date protein data available. The specific of this feature will be updated on the github repository and in the application itself when we implement the new pipeline. This was an excellent suggestion."
}
]
}
] | 1
|
https://f1000research.com/articles/7-1235
|
https://f1000research.com/articles/6-2073/v1
|
30 Nov 17
|
{
"type": "Review",
"title": "Dyslipidemia: Genetics, lipoprotein lipase and HindIII polymorphism",
"authors": [
"Marcos Palacio Rojas",
"Carem Prieto",
"Valmore Bermúdez",
"Carlos Garicano",
"Trina Núñez Nava",
"María Sofía Martínez",
"Juan Salazar",
"Edward Rojas",
"Arturo Pérez",
"Paulo Marca Vicuña",
"Natalia González Martínez",
"Santiago Maldonado Parra",
"Kyle Hoedebecke",
"Rosanna D’Addosio",
"Clímaco Cano",
"Joselyn Rojas",
"Carem Prieto",
"Valmore Bermúdez",
"Carlos Garicano",
"Trina Núñez Nava",
"María Sofía Martínez",
"Juan Salazar",
"Edward Rojas",
"Arturo Pérez",
"Paulo Marca Vicuña",
"Natalia González Martínez",
"Santiago Maldonado Parra",
"Kyle Hoedebecke",
"Rosanna D’Addosio",
"Clímaco Cano",
"Joselyn Rojas"
],
"abstract": "The direct link between lipid metabolism alterations and the increase of cardiovascular risk are well documented. Dyslipidemias, including isolated high LDL-c or mixed dyslipidemia, such as those seen in diabetes (hypertriglyceridemia, high LDL-c or low HDL-c), correlate with a significant risk of cardiovascular and cerebrovascular disease worldwide. This review analyzes the current knowledge concerning the genetic basis of lipid metabolism alterations, emphasizing lipoprotein lipase gene mutations and the HindIII polymorphism, which are associated with decreased levels of triglycerides and LDL-c, as well as higher levels of HDL-c. These patterns would be associated with decreased global morbidity and mortality, providing protection against cardiovascular and cerebrovascular diseases.",
"keywords": [
"Dyslipidemia",
"Polymorphisms",
"HindIII",
"Lipoprotein Lipase",
"coronary artery disease"
],
"content": "Dyslipidemia: The current status\n\nThe relationship between dyslipidemia and atherosclerosis continues to be an area of active research, since the prevalence of atherosclerosis and associated cardiovascular complications continue to increase in the industrialized world1. Cardiovascular disease (CVD) constitutes the greatest cause of morbidity and mortality globally with a high incidence in countries of all economic categories2. Evidence supporting a causal relationship between lipid profile abnormalities and the risk of coronary artery disease (CAD) is overwhelming, confirming that hypercholesterolemia is an independent risk factor for CVD3–5. In addition, hypertriglyceridemia and mixed dyslipidemias have been associated with the aggregation of metabolic risk factors, like hypertension (HTN)6 and obesity7.\n\nDyslipidemias are a group of metabolic derangements characterized by any or a combination of the following: elevated low density lipoprotein (LDL-c) (>130md/dL), elevated total cholesterol (>200 mg/dL), elevated TG (>150mg/dL), or low high density lipoprotein (HDL-c) (<40mg/dL in men and <50mg/dL in women)8.\n\nThe worldwide prevalence of dyslipidemia varies between different individuals, depending on race, age, socio-economic and cultural factors, lifestyle and genetics. This prevalence has increased significantly in growing cities with economic growth9. According to the National Health and Nutrition Examination Survey (NHANES) 2003–2006, 53.0% of the adult population in United States has some form dyslipidemia10; however, a lower prevalence have been reported for other countries, for example Canada and South Korea, with 45% and 44.1%, respectively8,11. De Souza et al. studied a sample of 1,039 patients and reported that the most common dyslipidemias in Brazil were isolated low HDL-c (18.3%), hypertriglyceridemia (17.1%), and isolated hypercholesterolemia (4.2%)12. 
These results are similar to those reported by Aguilar-Salinas et al., in which the prevalence of dyslipidemia in a group of 4,040 Mexican patients was 60.3%, of which low HDL-c represented 60.3%, hypercholesterolemia 43.6% and hypertriglyceridemia 31.5%13. Another cross-sectional, descriptive study carried out in 318 patients from Cuenca, Ecuador, found that 82.4% of individuals (86.8% of females and 76.5% of males) had some type of dyslipidemia. Isolated low HDL-c was the most prevalent abnormality, and it was significantly associated with obesity (OR: 3.99; 95% CI: 1.65–9.36; p<0.01)14.\n\nIn Venezuela, the CARMELA study evaluated the prevalence of lipid metabolism disorders in the city of Barquisimeto, Lara state, reporting one of the highest percentages in the country, with a 50.4% prevalence of dyslipidemia in this population15. Nevertheless, a study by Linares et al.16 with a sample of 2,230 individuals from Maracaibo City, Venezuela, found that the overall dyslipidemia prevalence was even higher, at 84.8% (n=1892), with 88% of females and 81.4% of males found to have dyslipidemia. High LDL-c was the most frequent abnormality found in this population (20%), followed by the combination of low HDL-c with high LDL-c (19%) and hypertriglyceridemia with high LDL-c and low HDL-c (16.2%). Bermúdez et al.17 found that low HDL-c was statistically associated with obesity, ethnic group, alcohol consumption, and elevated TG.\n\n\nDyslipidemia genetics\n\nThe association between family history of dyslipidemia and the risk of CVD is supported by a large body of evidence18–22. Additionally, the great advancement in DNA analysis techniques has aided research surrounding CVD and related genetics and epigenetics. 
Understanding gene mutations or polymorphisms involved in the synthesis, transport, and metabolism of lipoproteins allows recognition of potential therapeutic targets and alternative treatments through the identification of new molecules1,3,20.\n\nDyslipidemia is one of the most well characterized cardiovascular risk factors19,20. It depends not only on diet, but also on the synthesis and metabolism of lipoproteins conditioned by gene expression. Given the importance and great diversity of proteins that participate in lipid metabolism, one might expect that a single defect at any step of gene expression could affect the quantity or quality of the product and potentially predispose to dyslipidemias and CVD19.\n\nOne genetic abnormality associated with low HDL-c and increased CVD risk is the TaqIB polymorphism located on chromosome 16q21. This polymorphism alters cholesteryl ester transfer protein (CETP), which decreases HDL-c concentration23. Some deletions, inversions, and substitutions of the APO AI-IV, CII, and CIII genes are also associated with both premature CVD and low HDL-c24,25. Total deficiency of lecithin cholesterol acyl transferase (LCAT) can be seen after a C→T transition in codon 147 of exon 4 (W147R) or a G→A transition in codon 293 of exon 6 (M293I), as well as partial deficiencies of LCAT due to other C→T transitions. Additionally, the substitution of threonine by isoleucine at codon 123 (T123I) causes decreased HDL-c and higher cholesterol in the intima of arterial vessels26,27.\n\nBelow, some of the genetic alterations associated with low levels of HDL-c and a higher risk of CVD are highlighted:\n\n1. CETP. The transcript of this gene mediates the exchange of lipids between lipoproteins, resulting in a net transfer of cholesteryl esters from HDL to other lipoproteins and the capture of cholesterol by the liver. 
High levels of CETP lead to HDL particles rich in TG, making them a substrate for hepatic lipase, so that TG are hydrolyzed and ApoA-I is degraded in renal tubule cells. The subsequent decrease in HDL-c concentration creates a pro-atherogenic environment28. This occurs when CETP reaches a high level of expression in individuals with polymorphisms in the CETP-coding gene (16q21). The most frequently occurring and best characterized polymorphism, TaqIB in intron 1, is associated with the development of early atherosclerosis23.\n\n2. Familial hypoalphalipoproteinemia and HDL-c deficiency. Approximately 50% of HDL-c alterations are explained by polygenic defects in various chromosomal loci that control apolipoprotein expression (Apo A-I, A-II, C-II, C-III and A-IV) and LCAT. Multiple genetic defects have been reported, such as deletions, inversions and substitutions in the genes coding for apolipoproteins, all of which are associated with premature arterial disease24,25.\n\n3. LCAT. This liver-synthesized enzyme circulates in plasma, forming complexes with HDL and participating in reverse cholesterol transport. An LCAT deficiency results in the accumulation of free cholesterol in tissues. Insertions and substitutions in the LCAT gene may cause inactivation of the protein. Some of the reported mutations are C→T transitions in codon 147 of exon 4 (W147R), G→A in codon 293 of exon 6 (M293I), and the insertion of 3 base pairs in exon 4, introducing a glycine into a helical region of the protein, as well as the substitution N228K24,25. The best characterized mutation is a C→T transition that results in the substitution of threonine by isoleucine at codon 123 (T123I) of the protein, resulting in a partial deficiency of LCAT29,30.\n\nThe following are some genetic alterations associated with hypercholesterolemia and hypertriglyceridemia, including their relationship with increased cardiovascular risk:\n\n1. LDLR gene – LDL-c receptor and familial hypercholesterolemia. 
LDL-c is a macromolecular complex that transports cholesterol and cholesteryl esters from the liver to peripheral tissues, where cholesterol is introduced into cells through LDL receptors (LDLR). LDL binds to its receptors before internalization by endocytosis31. This transport represents the principal mechanism regulating cholesterol concentration in plasma, and any defect in this transport results in hypercholesterolemia. Mutations in the LDLR gene, which codes for the LDL-c receptor, are among the best characterized genetic defects causing dyslipidemia. The resulting autosomal dominant condition is called familial hypercholesterolemia32,33. Familial hypercholesterolemia is characterized by elevated levels of total cholesterol and LDL-c as a consequence of defects in cholesterol transport, receptor deficits, or a functional alteration of cellular receptors.\n\n2. APO B-100 gene – ligand of the LDL-c receptor and familial apolipoprotein B dysfunction (hypercholesterolemia type B). Inadequate cholesterol transport can also be caused by genetic defects in the ligand of LDLR, APO B-100. This autosomal dominant defect, also known as familial APO B-100 dysfunction, arises from mutations in the APO B-100 gene on the short arm of chromosome 234. The first mutation described is a G→A transition that results in a substitution of Arg3500→Gln in the region of APO B-100 that binds the LDL-c receptor33,35. Similarly, there are mutations associated with hypercholesterolemia and elevated TG that correlate with elevated CVD risk: mutations of the LDLR gene cause familial hypercholesterolemia, with elevation of both total cholesterol and LDL-c27,28, while APO B-100 mutations located on 2p (the G→A transition being the best known), resulting in a substitution of Arg3500→Gln in the region of APO B-100 that binds to the LDL-c receptor29, lead to type B hypercholesterolemia35.\n\n3. APO E gene – Apolipoprotein E and hyperlipoproteinemia or hyperlipidemia type III. 
Apolipoprotein E (ApoE) is a principal component of chylomicrons (CMs), very low density lipoprotein (VLDL) and some HDL-c. Its main functions are the hepatic clearance of CMs and VLDL, as well as their lipolysis by lipoprotein lipase (LPL)36. In hyperlipoproteinemia or hyperlipidemia type III, plasma levels of cholesterol and TG increase as a consequence of defective transport of CMs and VLDL, due to a defect in the ApoE gene located on the long arm of chromosome 19 (19q13.2)37,38. Polymorphisms in the gene encoding ApoE (alleles ε2, ε3 and ε4)39,40 are associated with variations in plasma cholesterol levels: individuals with the ε2 allele have cholesterol values about 10% below the mean of ε3 homozygotes, whereas those expressing the ε4 allele have values about 10% above that mean. Defective ApoE leads to hyperlipoproteinemia or type III hyperlipidemia, with elevated total cholesterol and elevated TG41,42.\n\n4. Lp(a) gene - lipoprotein (a) – Lp(a). Lipoprotein(a) is composed of a common low density lipoprotein (LDL) core linked to an apolipoprotein (a) [Apo(a)] by a disulfide bond between a cysteine in Kringle IV type 9 (Cys 67) and cysteine 3734 in Apo B-10043. Structurally, Apo(a) is composed of heavily glycosylated three-dimensional structures called "Kringles" because of their similarity to a looped Danish pastry. Each Kringle contains about 80 amino acids stabilized by 3 internal disulfide bonds, and together these domains surround the LDL molecule43. Apo(a) has high structural similarity with plasminogen, a key proenzyme of the fibrinolytic pathway44. Kringle IV domains are classified into 10 distinct subclasses, which compose most of the Apo(a) molecule, plus a linked Kringle V domain that resembles the catalytic region of plasminogen. The Kringle IV type 2 domain gene can be expressed a variable number of times, resulting in a variable copy number of this structure (3–40 copies) within the Lp(a) molecule44. 
This is the basis of the isoform size heterogeneity of Apo(a), whereas the remaining 9 subtypes of Kringle IV are present in only a single copy in the Apo(a) molecule. Lp(a) is one of the most important cardiovascular risk markers45,46, and to date several polymorphisms in the Apo(a) gene, located on chromosome 6 (6q26-q27), have been identified. For example, the KIV-2 CNV consists of variable numbers of repeated units of module 4, and the number of repetitions correlates inversely with plasma levels of Lp(a)47.\n\n5. HL gene - hepatic lipase and the combined familial hyperlipidemia phenotype. Combined familial hyperlipidemia is a genetic lipid disorder that accounts for 10–20% of premature CAD worldwide. Affected individuals exhibit hypercholesterolemia and/or hypertriglyceridemia and elevated concentrations of APO B with low values of HDL-c, a pattern collectively called the iatrogenic lipoproteinemia phenotype. Alterations in common genetic loci have been demonstrated among families with both the combined familial hyperlipidemia phenotype and the iatrogenic lipoproteinemia phenotype. Such loci include the genes for manganese superoxide dismutase, cholesteryl ester transfer protein, lecithin:cholesterol acyltransferase and AI-CIII-AIV, and a large number of studies relate polymorphisms in the promoter region of the HL gene (C-480T and C-514T) to lowered plasma levels of HDL-c48,49.\n\n6. LPL gene - lipoprotein lipase, Apo CII and familial dyslipidemia type I or familial chylomicronemia. Mutations in the LPL gene that result in a partial deficiency of the enzyme cause an increase in TG concentration. This is the basis of familial chylomicronemia, familial dyslipidemia type I or familial hypertriglyceridemia50. 
These are monogenic diseases with autosomal recessive inheritance, presenting with pure hypertriglyceridemia, TG values of 300 to 800 mg/dl, cholesterol <240 mg/dl, increases in VLDL and CMs, and lowered LDL-c and HDL-c. To date, several LPL variants have been characterized on the basis of amino acid substitutions at different positions (D9N, N291S, the substitution of glutamic acid for glycine at codon 188, and of a termination signal for serine at codon 447)51. LPL enzymatic activity is also lowered by mutations in the ApoC2 gene, which is located on chromosome 19q and encodes an essential activator of LPL52.\n\nThis information justifies the use of genetic markers for early diagnosis and cardiovascular risk assessment, especially in children and adolescents, in order to adopt early nutritional or pharmacologic interventions that mitigate atherosclerotic artery disease.\n\n\nLipoprotein lipase\n\nThe LPL gene is located on the short arm of chromosome 8, in region 21.3 (8p21.3). It is formed of 10 exons and 9 introns (Figure 1), and encodes a protein of 475 amino acids53,54.\n\nThe authors confirm that this is an original image and has not been re-used or adapted from another source.\n\nLPL is a multifunctional glycoprotein enzyme that plays an important role in lipid metabolism. After being secreted, it adheres to the luminal surface of endothelial cells, where it hydrolyzes TG in circulating lipoproteins. This constitutes the limiting step in the clearance of lipoproteins from the circulation, both exogenous (CMs) and endogenous (VLDL)55,56.\n\nIn this way, LPL affects serum levels of TG, generating lipoprotein remnants that are processed by hepatic lipase. Recently, it has been demonstrated that LPL serves as a ligand for the LDL receptor-related protein and influences the hepatic secretion and capture of VLDL and LDL-c57. 
Additionally, LPL has been linked to the retention of LDL-c by the sub-endothelial matrix and arterial wall, increasing the conversion of LDL and VLDL into more atherogenic forms58. Genetic modifications can affect LPL activity, resulting in changes in lipid metabolism such as slow hydrolysis of CMs and VLDL-c, a longer LDL-c half-life, and decreased production of HDL59,60.\n\nAround 100 mutations have been described in the LPL gene; the most frequent are Asp9Asn, Gly188Glu and Asn291Ser. In homozygous form, these mutations are associated with hyperlipoproteinemia type I (familial chylomicronemia). Heterozygous mutations have a significant incidence in the general population (3–7%) and lead to up to a 50% decrease in LPL activity, causing an increase in TG and a decrease in HDL-c. All of these lipid profile patterns increase the risk of CVD61.\n\n\nLPL gene polymorphisms\n\nGenetic studies have revealed around 100 mutations and single-nucleotide polymorphisms in the LPL gene, some protective and others deleterious:\n\n1. The Ser447X (rs328) polymorphism is located in exon 9, where cytosine is substituted by guanine at position 1959. This substitution removes the final two amino acids, serine 447 and glycine 448, producing a prematurely truncated LPL protein with increased lipolytic activity and increased post-heparin LPL activity in X447 carriers. The Ser447X variant is associated with low TG levels, small increases in HDL-c, and a moderate reduction in CVD risk62.\n\n2. The PvuII (rs285) polymorphism, located in intron 6, lies 1.57 kb from the splicing acceptor (SA) site and is the product of a change of cytosine for thymine. The region containing the PvuII site resembles a splice site, which can interfere with the correct splicing of mRNA. 
However, the physiological role of this polymorphism is not yet completely clear, since it alters neither the serum concentration of lipids nor the amino acid sequence, and a previous meta-analysis suggests that cardiovascular risk is not influenced by this polymorphism63.\n\n3. The HindIII (rs320) polymorphism is one of the most common polymorphisms of the LPL gene (see below).\n\n\nHindIII (rs320) polymorphism\n\nHindIII is an intronic substitution of thymine (T) by guanine (G) at position 495 of intron 8 of the LPL gene, which eliminates the restriction site for the HindIII enzyme (Figure 2 and Figure 3).\n\nThe authors confirm that this is an original image and has not been re-used or adapted from another source.\n\nThe authors confirm that this is an original image and has not been re-used or adapted from another source.\n\nHindIII is one of the most frequent LPL polymorphisms. Various studies show that the homozygous T/T genotype (H+/H+) is the most frequent, representing 45.1% and 56.4% of Iranian and South Indian populations, respectively, followed by the heterozygous T/G genotype (35.8–36.6%) and the homozygous G/G genotype (H-/H-, 6.93–19%)64,65. Similar results have been reported in Europe66,67 and Brazil68.\n\nThe H+ allele (presence of thymine "T", i.e., the restriction site of the HindIII enzyme) results in the sequence being cut into two bands of 217 bp and 139 bp, and is associated with decreased LPL activity compared with the H- allele (presence of "G", i.e., absence of the restriction site, the HindIII polymorphism), in which intron 8 of the LPL gene is not cut and remains as a single 356 bp fragment (Figure 4)69. These variants lead to alterations in lipid metabolism and to modifications of the cardiovascular risk profile in these populations.\n\nThe authors confirm that this is an original image and has not been re-used or adapted from another source.\n\nSome studies have demonstrated that the common allele (T or H+) is associated with lower levels of HDL-c in contrast with the uncommon allele (G or H-)70,71. In addition, individuals with the H+/H+ genotype have higher serum TG levels than H-/H- homozygotes66,67,70,72. Similarly, there have been reports of high serum levels of LDL-c71 and a higher global cardiovascular risk in patients who carry the common allele (T or H+), see Table 1. Some studies have reported a significant drop in LPL activity among carriers of the uncommon G allele when compared with carriers of the more common T allele57.\n\nLPL expressed by macrophages and other cells of the vascular wall is involved in the early atherogenic process and is associated with increased atherosclerosis. Overexpression of LPL is also associated with insulin resistance and HTN, through increased sodium retention, inflammation, vascular remodeling, sympathetic nervous system activation, oxidative stress and vasoconstriction73–75.\n\nOn the other hand, HTN (mostly systolic) has been shown to be associated with the HindIII polymorphism in the Mexican population in studies by Muñoz-Barrios et al.76. Similarly, the genotype homozygous for the common allele (H+) was associated with a higher risk of myocardial infarction in patients older than 90 years, in contrast with carriers of the uncommon allele (H-), which was associated with a lower prevalence of cardiovascular complications77. Associations have also been reported between LPL HindIII genotypes and HTN (H+/H+ with an OR of 2.13; 95% CI: 0.93–4.8)72 and smoking58. 
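The PCR-RFLP genotyping logic described above can be made concrete with a short sketch. The following Python snippet is illustrative only (the function names are ours, not from the source); the fragment sizes and genotype frequencies are the ones reported in the text, and the allele frequency follows the standard gene-counting rule (homozygote frequency plus half the heterozygote frequency).

```python
# Illustrative sketch (not from the cited studies): calling HindIII genotypes
# from PCR-RFLP band patterns and estimating allele frequencies from reported
# genotype frequencies. Fragment sizes follow the article: the T (H+) allele
# is cut into 217 bp + 139 bp; the G (H-) allele remains a single 356 bp band.

def genotype_from_bands(bands):
    """Infer the HindIII genotype from the set of observed fragment sizes (bp)."""
    bands = set(bands)
    if bands == {217, 139}:
        return "T/T (H+/H+)"   # both alleles carry the restriction site
    if bands == {356}:
        return "G/G (H-/H-)"   # neither allele is cut
    if bands == {356, 217, 139}:
        return "T/G (H+/H-)"   # one cut allele, one uncut allele
    raise ValueError(f"unexpected band pattern: {sorted(bands)}")

def allele_freq_T(f_TT, f_TG):
    """T-allele frequency: homozygote frequency plus half the heterozygotes."""
    return f_TT + 0.5 * f_TG

if __name__ == "__main__":
    print(genotype_from_bands([356, 217, 139]))
    # Genotype frequencies reported for the Iranian population (45.1% T/T, 35.8% T/G)
    print(round(allele_freq_T(0.451, 0.358), 3))
```

For the Iranian genotype frequencies quoted above (45.1% T/T, 35.8% T/G), this gives an estimated T-allele frequency of about 0.63.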
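Several of the association results above are reported as odds ratios with 95% confidence intervals (e.g. OR of 2.13, 95% CI 0.93–4.8). As a reminder of how such figures are derived, here is a hedged Python sketch computing an odds ratio and Wald 95% CI from a 2×2 table; the counts are invented for illustration and are not taken from any cited study.

```python
import math

# Hypothetical illustration (counts are made up, not from the cited studies):
# odds ratio and Wald 95% CI from a 2x2 table, the quantities reported in the
# text for HindIII genotype/allele associations.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

if __name__ == "__main__":
    # e.g. G-allele carriers among cases vs controls (made-up counts)
    or_, lower, upper = odds_ratio_ci(240, 320, 360, 380)
    print(f"OR = {or_:.2f}, 95% CI = {lower:.2f}-{upper:.2f}")
```

Note that a 95% CI that includes 1 (such as 0.93–4.8) does not exclude a null effect, which is worth keeping in mind when reading the genotype–HTN association above.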
In a more recent study, it was established that the presence of the genotype homozygous for the common allele (H+/H+) of the LPL gene is a risk factor for a first episode of myocardial infarction65. Conversely, studies by Imeni et al.78 in an Iranian population showed no statistically significant association between CAD and the genotypic distribution of the HindIII polymorphism.\n\nRecent studies have shown an increased risk of stroke among those with LPL gene variations, particularly the HindIII polymorphism79. He et al. reported a lower risk of stroke among carriers of the G allele of the HindIII polymorphism (G vs. T; OR=0.78, 95% CI=0.70–0.87, p<0.001). This pattern was observed in patients with ischemic stroke (G vs. T; OR=0.84, 95% CI=0.74–0.95, p=0.005) and hemorrhagic stroke (G vs. T; OR=0.60, 95% CI=0.48–0.74, p<0.001)80.\n\nFrom a neurologic point of view, there are scant data associating the genotype homozygous for the common allele (H+/H+) with the development of late-onset Alzheimer's disease. This association is based on LPL's role in regulating cognitive function, mediated by the transport of cholesterol and vitamin E to neuronal cells in the hippocampus and other brain areas64. These investigations appear to indicate that the HindIII polymorphism might exert a positive influence on human metabolism, which translates into improved cardiac and cerebrovascular function.\n\n\nConclusions\n\nDyslipidemias are independent risk factors for atherosclerotic artery disease. High TC, TG and LDL-c, as well as decreased serum HDL-c, are frequently associated with low physical activity and poor eating habits, but a large number of mutations and single-nucleotide polymorphisms are related to specific protein dysfunctions within the major lipoprotein metabolism pathways, including CETP, ApoA, LCAT, the LDL receptor, Apo B-100 and LPL.\n\nIn this regard, the rare H- allele of the LPL gene HindIII polymorphism appears protective through its association with an improved lipid profile (low TG and LDL-c and high HDL-c). 
On the other hand, the presence of the common allele (T or H+) is associated with pro-atherogenic dyslipidemias and raised cardiovascular risk. The uncommon allele (G or H-), which lacks the HindIII restriction site, shows a comparatively low prevalence (roughly 20% or less) according to the currently available literature.\n\nThere are no studies in Venezuela that allow us to establish the true prevalence of the HindIII polymorphism, or to corroborate its association with changes in the lipid profile or an increased risk of cardiovascular disease. We therefore suggest a national population-based genetic study of these lipid disorders, with the aim of better understanding cardiovascular risk factors in Latin America.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Technological, Humanistic, and Scientific Development Council (Consejo de Desarrollo Científico, Humanístico y Tecnológico; CONDES), University of Zulia (grant nº CC-0437-10-21-09-10).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nHelkin A, Stein JJ, Lin S, et al.: Dyslipidemia Part 1--Review of Lipid Metabolism and Vascular Cell Physiology. Vasc Endovascular Surg. 2016; 50(2): 107–18. PubMed Abstract | Publisher Full Text\n\nMurray CJ, Lopez AD: Mortality by cause for eight regions of the world: Global Burden of Disease Study. Lancet. 1997; 349(9061): 1269–76. PubMed Abstract | Publisher Full Text\n\nGordon DJ, Probstfield JL, Garrison RJ, et al.: High-density lipoprotein cholesterol and cardiovascular disease. Four prospective American studies. Circulation. 1989; 79(1): 8–15. PubMed Abstract | Publisher Full Text\n\nIsomaa B, Almgren P, Tuomi T, et al.: Cardiovascular morbidity and mortality associated with the metabolic syndrome. Diabetes Care. 2001; 24(4): 683–9. PubMed Abstract | Publisher Full Text\n\nCarr MC, Brunzell JD: Abdominal obesity and dyslipidemia in the metabolic syndrome: importance of type 2 diabetes and familial combined hyperlipidemia in coronary artery disease risk. J Clin Endocrinol Metab. 2004; 89(6): 2601–7. PubMed Abstract | Publisher Full Text\n\nOnat A, Hergenç G, Sari I, et al.: Dyslipidemic hypertension: distinctive features and cardiovascular risk in a prospective population-based study. Am J Hypertens. 2005; 18(3): 409–16. PubMed Abstract | Publisher Full Text\n\nBrown CD, Higgins M, Donato KA, et al.: Body mass index and the prevalence of hypertension and dyslipidemia. Obes Res. 2000; 8(9): 605–19. 
PubMed Abstract | Publisher Full Text\n\nJoffres M, Shields M, Tremblay MS, et al.: Dyslipidemia prevalence, treatment, control, and awareness in the Canadian Health Measures Survey. Can J Public Health. 2013; 104(3): e252–257. PubMed Abstract | Publisher Full Text\n\nİlhan Ç, Beytullah Y, Şemsettin Ş, et al.: Serum lipid and lipoprotein levels, dyslipidemia prevalence, and the factors that influence these parameters in a Turkish population living in the province of Tokat. Turk J Med Sci. 2010; 40(5): 771–82. Publisher Full Text\n\nTóth PP, Potter D, Ming EE: Prevalence of lipid abnormalities in the United States: the National Health and Nutrition Examination Survey 2003–2006. J Clin Lipidol. 2012; 6(4): 325–30. PubMed Abstract | Publisher Full Text\n\nLee MH, Kim HC, Ahn SV, et al.: Prevalence of Dyslipidemia among Korean Adults: Korea National Health and Nutrition Survey 1998–2005. Diabetes Metab J. 2012; 36(1): 43–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Souza LJ, Souto Filho JT, de Souza TF, et al.: Prevalence of dyslipidemia and risk factors in Campos dos Goytacazes, in the Brazilian state of Rio de Janeiro. Arq Bras Cardiol. 2003; 81(3): 249–64. PubMed Abstract | Publisher Full Text\n\nAguilar-Salinas CA, Gómez-Pérez FJ, Rull J, et al.: Prevalence of dyslipidemias in the Mexican National Health and Nutrition Survey 2006. Salud Publica Mex. 2010; 52 Suppl 1: S44–53. PubMed Abstract | Publisher Full Text\n\nChiqui RA, Bermúdez V, Añez R, et al.: Prevalencia de dislipidemia y factores asociados en la ciudad de Cuenca, Ecuador. Síndrome Cardiometabólico. 2014; 4(2): 31–41. Reference Source\n\nVinueza R, Boissonnet CP, Acevedo M, et al.: Dyslipidemia in seven Latin American cities: CARMELA study. Prev Med. 2010; 50(3): 106–11. PubMed Abstract | Publisher Full Text\n\nLinares S, Bermúdez V, Rojas J, et al.: Prevalencia de dislipidemias y factores psicobiológicos asociados en individuos adultos del municipio Maracaibo, Venezuela. 
Síndrome Cardiometabólico. [Internet]. 2015; 3(3): 63–75.\n\nBermúdez V, Salazar J, Rojas J, et al.: Prevalence, Lipid Abnormalities Combinations and Risk Factors Associated with Low HDL-C Levels in Maracaibo City, Venezuela. J J Commun Med. 2015; 1(2): 9. Reference Source\n\nMiller M: Dyslipidemia and cardiovascular risk: the importance of early prevention. QJM. 2009; 102(9): 657–667. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLusis AJ, Rotter JI, Sparkes RS, et al.: Molecular Genetics of Coronary Artery Disease: Candidate Genes and Processes in Atherosclerosis. Monographs in Human Genetics. Karger, 1992; 14: I–XVII. Publisher Full Text\n\nGenest JJ Jr, Martin-Munley SS, McNamara JR, et al.: Familial lipoprotein disorders in patients with premature coronary artery disease. Circulation. 1992; 85(6): 2025–2033. PubMed Abstract | Publisher Full Text\n\nAulchenko YS, Ripatti S, Lindqvist I, et al.: Loci influencing lipid levels and coronary heart disease risk in 16 European population cohorts. Nat Genet. 2009; 41(1): 47–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNordestgaard BG, Chapman MJ, Humphries SE, et al.: Familial hypercholesterolaemia is underdiagnosed and undertreated in the general population: guidance for clinicians to prevent coronary heart disease: consensus statement of the European Atherosclerosis Society. Eur Heart J. 2013; 34(45): 3478–90a. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoekholdt SM, Sacks FM, Jukema JW, et al.: Cholesteryl ester transfer protein TaqIB variant, high-density lipoprotein cholesterol levels, cardiovascular risk, and efficacy of pravastatin treatment: individual patient meta-analysis of 13,677 subjects. Circulation. 2005; 111(3): 278–87. 
PubMed Abstract | Publisher Full Text\n\nIkewaki K, Matsunaga A, Han H, et al.: A novel two nucleotide deletion in the apolipoprotein A-I gene, apoA-I Shinbashi, associated with high density lipoprotein deficiency, corneal opacities, planar xanthomas, and premature coronary artery disease. Atherosclerosis. 2004; 172(1): 39–45. PubMed Abstract | Publisher Full Text\n\nPaulweber B, Friedl W, Krempler F, et al.: Genetic variation in the apolipoprotein AI-CIII-AIV gene cluster and coronary heart disease. Atherosclerosis. 1988; 73(2–3): 125–133. PubMed Abstract | Publisher Full Text\n\nHolleboom AG, Daniil G, Fu X, et al.: Lipid oxidation in carriers of lecithin:cholesterol acyltransferase gene mutations. Arterioscler Thromb Vasc Biol. 2012; 32(12): 3066–75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFunke H, Von Eckardstein A, Pritchard PH, et al.: Genetic and phenotypic heterogeneity in familial lecithin: cholesterol acyltransferase (LCAT) deficiency. Six newly identified defective alleles further contribute to the structural heterogeneity in this disease. J Clin Invest. 1993; 91(2): 677–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Grooth GJ, Klerkx AH, Stroes ES, et al.: A review of CETP and its relation to atherosclerosis. J Lipid Res. 2004; 45(11): 1967–74. PubMed Abstract | Publisher Full Text\n\nSaeedi R, Li M, Frohlich J: A review on lecithin:cholesterol acyltransferase deficiency. Clin Biochem. 2015; 48(7–8): 472–5. PubMed Abstract | Publisher Full Text\n\nLevinson SS, Wagner SG: Implications of reverse cholesterol transport: recent studies. Clin Chim Acta. 2015; 439(Supplement C): 154–61. PubMed Abstract | Publisher Full Text\n\nBrown MS, Goldstein JL: A receptor-mediated pathway for cholesterol homeostasis. Science. 1986; 232(4746): 34–47. PubMed Abstract | Publisher Full Text\n\nHobbs HH, Brown MS, Goldstein JL: Molecular genetics of the LDL receptor gene in familial hypercholesterolemia. Hum Mutat. 1992; 1(6): 445–66. 
PubMed Abstract | Publisher Full Text\n\nSoria LF, Ludwig EH, Clarke HR, et al.: Association between a specific apolipoprotein B mutation and familial defective apolipoprotein B-100. Proc Natl Acad Sci U S A. 1989; 86(2): 587–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFouchier SW, Defesche JC, Kastelein JJ, et al.: Familial defective apolipoprotein B versus familial hypercholesterolemia: an assessment of risk. Semin Vasc Med. 2004; 4(3): 259–64. PubMed Abstract | Publisher Full Text\n\nGaffney D, Reid JM, Cameron LM, et al.: Independent mutations at codon 3500 of the apolipoprotein B gene are associated with hyperlipidemia. Arterioscler Thromb Vasc Biol. 1995; 15(8): 1025–9. PubMed Abstract | Publisher Full Text\n\nMahley RW, Innerarity TL, Rall SC Jr, et al.: Plasma lipoproteins: apolipoprotein structure and function. J Lipid Res. 1984; 25(12): 1277–1294. PubMed Abstract\n\nMahley RW, Huang Y, Rall SC Jr: Pathogenesis of type III hyperlipoproteinemia (dysbetalipoproteinemia): questions, quandaries, and paradoxes. J Lipid Res. 1999; 40(11): 1933–1949. PubMed Abstract\n\nKypreos KE, Li X, van Dijk KW, et al.: Molecular mechanisms of type III hyperlipoproteinemia: The contribution of the carboxy-terminal domain of ApoE can account for the dyslipidemia that is associated with the E2/E2 phenotype. Biochemistry. 2003; 42(33): 9841–9853. PubMed Abstract | Publisher Full Text\n\nZannis VI, Breslow JL, Utermann G, et al.: Proposed nomenclature of apoE isoproteins, apoE genotypes, and phenotypes. J Lipid Res. 1982; 23(6): 911–914. PubMed Abstract\n\nHixson JE, Vernier DT: Restriction isotyping of human apolipoprotein E by gene amplification and cleavage with HhaI. J Lipid Res. 1990; 31(3): 545–548. PubMed Abstract\n\nArráiz N, Bermúdez V, Prieto C, et al.: Association between apoliprotein E gene polymorphism and hypercholesterolemic phenotype in Maracaibo, Zulia state, Venezuela. Am J Ther. 2010; 17(3): 330–336. 
PubMed Abstract | Publisher Full Text\n\nEichner JE, Dunn ST, Perveen G, et al.: Apolipoprotein E polymorphism and cardiovascular disease: a HuGE review. Am J Epidemiol. 2002; 155(6): 487–495. PubMed Abstract | Publisher Full Text\n\nSchmidt K, Noureen A, Kronenberg F, et al.: Structure, function, and genetics of lipoprotein (a). J Lipid Res. 2016; 57(8): 1339–1359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBucci M, Tana C, Giamberardino MA, et al.: Lp (a) and cardiovascular risk: Investigating the hidden side of the moon. Nutr Metab Cardiovasc Dis. 2016; 26(11): 980–986. PubMed Abstract | Publisher Full Text\n\nBermúdez V, Arraiz N, Cano C, et al.: Lipoprotein (a): molecular and epidemiologic basis about its role in cardiovascular diseases. Revista Latinoamericana de Hipertensión. 2008; 3(4): 113–122. Reference Source\n\nBermúdez V, Arraiz N, Rojas E, et al.: Abnormally high lipoprotein (a) levels in african-american communities from venezuela faced to other african-descending populations: are ethnic origins related? Revista Latinoamericana de Hipertensión. 2008; 3(3): 66–72. Reference Source\n\nParson W, Kraft HG, Niederstätter H, et al.: A common nonsense mutation in the repetitive Kringle IV-2 domain of human apolipoprotein (a) results in a truncated protein and low plasma Lp (a). Hum Mutat. 2004; 24(6): 474–480. PubMed Abstract | Publisher Full Text\n\nVerma P, Verma DK, Sethi R, et al.: The rs2070895 (-250G/A) Single Nucleotide Polymorphism in Hepatic Lipase (HL) Gene and the Risk of Coronary Artery Disease in North Indian Population: A Case-Control Study. J Clin Diagn Res. 2016; 10(8): GC01–06. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEller P, Schgoer W, Mueller T, et al.: Hepatic lipase polymorphism and increased risk of peripheral arterial disease. J Intern Med. 2005; 258(4): 344–8. PubMed Abstract | Publisher Full Text\n\nStroes E, Moulin P, Parhofer KG, et al.: Diagnostic algorithm for familial chylomicronemia syndrome. 
Atheroscler Suppl. 2017; 23(Supplement C): 1–7. PubMed Abstract | Publisher Full Text\n\nClee SM, Loubser O, Collins J, et al.: The LPL S447X cSNP is associated with decreased blood pressure and plasma triglycerides, and reduced risk of coronary artery disease. Clin Genet. 2001; 60(4): 293–300. PubMed Abstract | Publisher Full Text\n\nBaggio G, Manzato E, Gabelli C, et al.: Apolipoprotein C-II deficiency syndrome. Clinical features, lipoprotein characterization, lipase activity, and correction of hypertriglyceridemia after apolipoprotein C-II administration in two affected patients. J Clin Invest. 1986; 77(2): 520–527. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOka K, Tkalcevic GT, Nakano T, et al.: Structure and polymorphic map of human lipoprotein lipase gene. Biochim Biophys Acta. 1990; 1049(1): 21–6. PubMed Abstract | Publisher Full Text\n\nDeeb SS, Peng RL: Structure of the human lipoprotein lipase gene. Biochemistry. 1989; 28(10): 4131–5. PubMed Abstract | Publisher Full Text\n\nEckel RH: Lipoprotein lipase. A multifunctional enzyme relevant to common metabolic diseases. N Engl J Med. 1989; 320(16): 1060–8. PubMed Abstract | Publisher Full Text\n\nWang H, Eckel RH: Lipoprotein lipase: from gene to obesity. Am J Physiol Endocrinol Metab. 2009; 297(2): E271–288. PubMed Abstract | Publisher Full Text\n\nFernández-Borja M, Bellido D, Vilella E, et al.: Lipoprotein lipase-mediated uptake of lipoprotein in human fibroblasts: evidence for an LDL receptor-independent internalization pathway. J Lipid Res. 1996; 37(3): 464–81. PubMed Abstract\n\nDaoud MS, Ataya FS, Fouad D, et al.: Associations of three lipoprotein lipase gene polymorphisms, lipid profiles and coronary artery disease. Biomed Rep. 2013; 1(4): 573–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGerdes C, Gerdes LU, Hansen PS, et al.: Polymorphisms in the lipoprotein lipase gene and their associations with plasma lipid concentrations in 40-year-old Danish men. Circulation. 
1995; 92(7): 1765–9. PubMed Abstract | Publisher Full Text\n\nHeizmann C, Kirchgessner T, Kwiterovich PO, et al.: DNA polymorphism haplotypes of the human lipoprotein lipase gene: possible association with high density lipoprotein levels. Hum Genet. 1991; 86(6): 578–84. PubMed Abstract | Publisher Full Text\n\nPetrescu-Dănilă E, Voicu PM, Ionescu CR: [Mutagenic aspects of the lipoprotein lipase gene]. Rev Med Chir Soc Med Nat Iasi. 2006; 110(1): 173–7. PubMed Abstract\n\nGroenemeijer BE, Hallman MD, Reymer PW, et al.: Genetic variant showing a positive interaction with beta-blocking agents with a beneficial influence on lipoprotein lipase activity, HDL cholesterol, and triglyceride levels in coronary artery disease patients. The Ser447-stop substitution in the lipoprotein lipase gene. REGRESS Study Group. Circulation. 1997; 95(12): 2628–2635. PubMed Abstract | Publisher Full Text\n\nCagatay P, Susleyici-Duman B, Ciftci C: Lipoprotein lipase gene PvuII polymorphism serum lipids and risk for coronary artery disease: meta-analysis. Dis Markers. 2007; 23(3): 161–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSayad A, Noruzinia M, Zamani M, et al.: Lipoprotein Lipase HindIII Intronic Polymorphism in a Subset of Iranian Patients with Late-Onset Alzheimer's Disease. Cell J. 2012; 14(1): 67–72. PubMed Abstract | Free Full Text\n\nTanguturi PR, Pullareddy B, Krishna BR, et al.: Lipoprotein lipase gene HindIII polymorphism and risk of myocardial infarction in South Indian population. Indian Heart J. 2013; 65(6): 653–657. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeorges JL, Régis-Bailly A, Salah D, et al.: Family study of lipoprotein lipase gene polymorphisms and plasma triglyceride levels. Genet Epidemiol. 1996; 13(2): 179–92. 
PubMed Abstract | Publisher Full Text\n\nMattu RK, Needham EW, Morgan R, et al.: DNA variants at the LPL gene locus associate with angiographically defined severity of atherosclerosis and serum lipoprotein levels in a Welsh population. Arterioscler Thromb. 1994; 14(7): 1090–7. PubMed Abstract | Publisher Full Text\n\nRios DL, Vargas AF, Ewald GM, et al.: Common variants in the lipoprotein lipase gene in Brazil: association with lipids and angiographically assessed coronary atherosclerosis. Clin Chem Lab Med. 2003; 41(10): 1351–6. PubMed Abstract | Publisher Full Text\n\nLarson I, Hoffmann MM, Ordovas JM, et al.: The lipoprotein lipase HindIII polymorphism: association with total cholesterol and LDL-cholesterol, but not with HDL and triglycerides in 342 females. Clin Chem. 1999; 45(7): 963–8. PubMed Abstract\n\nRazzaghi H, Aston CE, Hamman RF, et al.: Genetic screening of the lipoprotein lipase gene for mutations associated with high triglyceride/low HDL-cholesterol levels. Hum Genet. 2000; 107(3): 257–67. PubMed Abstract | Publisher Full Text\n\nHolmer SR, Hengstenberg C, Mayer B, et al.: Lipoprotein lipase gene polymorphism, cholesterol subfractions and myocardial infarction in large samples of the general population. Cardiovasc Res. 2000; 47(4): 806–12. PubMed Abstract | Publisher Full Text\n\nHemimi N, Salam ME, Abd-Elwahab M: The Lipoprotein Lipase HindIII Polymorphism And The Susceptibility To Hypertension. Egypt J Biochem Mol Biol. 2009; 27(1). Publisher Full Text\n\nGoodarzi MO, Guo X, Taylor KD, et al.: Lipoprotein lipase is a gene for insulin resistance in Mexican Americans. Diabetes. 2004; 53(1): 214–20. PubMed Abstract | Publisher Full Text\n\nMead JR, Cryer A, Ramji DP: Lipoprotein lipase, a key role in atherosclerosis? FEBS Lett. 1999; 462(1–2): 1–6. PubMed Abstract | Publisher Full Text\n\nMcFarlane SI, Banerji M, Sowers JR: Insulin resistance and cardiovascular disease. J Clin Endocrinol Metab. 2001; 86(2): 713–8. 
PubMed Abstract | Publisher Full Text\n\nMuñoz-Barrios S, Guzmán-Guzmán IP, Muñoz-Valle JF, et al.: Association of the HindIII and S447X polymorphisms in LPL gene with hypertension and type 2 diabetes in Mexican families. Dis Markers. 2012; 33(6): 313–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalygina NA, Melent'ev AS, Kostomarova IV, et al.: [Connection of HindIII-polymorphism in the lipoprotein lipase gene with myocardial infarct and life span in elderly ischemic heart disease patients]. Mol Biol (Mosk). 2001; 35(5): 787–91. PubMed Abstract\n\nImeni M, Hasanzad M, Naji T, et al.: Analysis of the association Hind III Polymorphism of Lipoprotein Lipase gene on the risk of coronary artery disease. Res Mol Med. 2013; 1(3): 19–24. Publisher Full Text\n\nShimo-Nakanishi Y, Urabe T, Hattori N, et al.: Polymorphism of the lipoprotein lipase gene and risk of atherothrombotic cerebral infarction in the Japanese. Stroke. 2001; 32(7): 1481–6. PubMed Abstract | Publisher Full Text\n\nHe T, Wang J, Deng WS, et al.: Association between Lipoprotein Lipase Polymorphism and the Risk of Stroke: A Meta-analysis. J Stroke Cerebrovasc Dis. 2017; 26(11): 2570–2578. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "30011",
"date": "19 Jan 2018",
"name": "David J. Galton",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review deals with the following genes involved in lipid metabolism: CETP, LCAT, LDL receptor, apoE, Lp(a), hepatic lipase and lipoprotein lipase. However it misses out details of the apoC3 gene which is the only one in which a specific therapy (volanesorsen) has been developed. A review of this can be found in Galton (2017)1.\nThe authors then go on to deal with the Hind 111 polymorphism of Lipoprotein lipase. A common HindIII polymorphism in intron 8 (T/G) of the LPL gene has been found to be associated with altered plasma TG and HDL-cholesterol, and CAD risk in several studies, but they do not comment on its functional significance.\nIt is known that certain intronic sequence contain regulatory elements that are important for transcription and translational regulation of a gene. A recent study (Chen et al. (2008)2) showed that this Hind 111 polymorphism affects the binding site of a transcription factor that regulates the transcription of LPL gene. Electrophoretic mobility shift assays revealed that the HindIII site binds to a transcription factor and that the mutant allele has lower binding affinity than the wild type allele. Transcription assays containing the entire intron 8 sequence along with full-length human LPL promoter were carried out in COS-1 and human vascular smooth muscle cells. The mutant allele was associated with significantly decreased luciferase expression level compared to the wild type allele in both the muscle (3.394 ± 0.022 vs. 
4.184 ± 0.028; P=4.7 × 10−6) and COS-1 (11.603 ± 0.409 vs. 14.373 ± 1.096; P<0.0001) cells. This study demonstrates for the first time that the polymorphic HindIII site in the LPL gene is functional because it affects the binding of a transcription factor and it also has an impact on LPL expression.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
},
{
"id": "31173",
"date": "05 Mar 2018",
"name": "Carlos Aguilar-Salinas",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAuthors of the paper \"Dyslipidemia: Genetics, lipoprotein lipase and HindIII polymorphism” summarizes several papers published by Latin American researchers about lipid disorders. The document highlight the need for more studies about genetics of dyslipidemia in this region.\n\nThe main limitations of the study are:\nThe paper covers a large number of topics. As a result, information is presented without a critical analyses. Since the HindIII polymorphism is the main issue under review, a large proportion of the review could be summarized and the HindIII polymorphism data extended. For example, the genes involved in familial hypoalphalipoproteinemia are enlisted partially (i.e. ABCA1 was not mentioned). It is not clear the reason to devote several paragrapjs for FH genes when they are not related with the main topic of this review.\n\nReasons for the large differences in the prevalence of the lipid disorders between Latin-American surveys could be critically discussed.\n\nThe style of the manuscript could be upgraded. The flow of the information should be improved.\n\nIt is kindly suggested to consider a redesign of the structure of the document.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? 
Partly\n\nAre the conclusions drawn appropriate in the context of the current research literature? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-2073
|
https://f1000research.com/articles/7-596/v2
|
10 Jul 18
|
{
"type": "Research Article",
"title": "Diurnal variation in the proinflammatory activity of urban fine particulate matter (PM2.5) by in vitro assays",
"authors": [
"Christopher Lovett",
"Mafalda Cacciottolo",
"Farimah Shirmohammadi",
"Amin Haghani",
"Todd E. Morgan",
"Constantinos Sioutas",
"Caleb E. Finch",
"Mafalda Cacciottolo",
"Farimah Shirmohammadi",
"Amin Haghani",
"Todd E. Morgan",
"Constantinos Sioutas",
"Caleb E. Finch"
],
"abstract": "Background: Ambient particulate matter (PM) smaller than 2.5 µm in diameter (PM2.5) undergoes diurnal changes in chemical composition due to photochemical oxidation. In this study we examine the relationships between oxidative activity and inflammatory responses associated with these diurnal chemical changes. Because secondary PM contains a higher fraction of oxidized PM species, we hypothesized that PM2.5 collected during afternoon hours would induce a greater inflammatory response than primary, morning PM2.5. Methods: Time-integrated aqueous slurry samples of ambient PM2.5 were collected using a direct aerosol-into-liquid collection system during defined morning and afternoon time periods. PM2.5 samples were collected for 5 weeks in the late summer (August-September) of 2016 at a central Los Angeles site. Morning samples, largely consisting of fresh primary traffic emissions (primary PM), were collected from 6-9am (am-PM2.5), and afternoon samples were collected from 12-4pm (pm-PM2.5), when PM composition is dominated by products of photochemical oxidation (secondary PM). The two diurnally phased PM2.5 slurries (am- and pm-PM2.5) were characterized for chemical composition and BV-2 microglia were assayed in vitro for oxidative and inflammatory gene responses. Results: Contrary to expectations, the am-PM2.5 slurry had more proinflammatory activity than the pm-PM2.5 slurry as revealed by nitric oxide (NO) induction, as well as the upregulation of proinflammatory cytokines IL-1β, IL-6, and CCL2 (MCP-1), as assessed by messenger RNA production. Conclusions: The diurnal differences observed in this study may be in part attributed to the greater content of transition metals and water-insoluble organic carbon (WIOC) of am-PM2.5 (primary PM) vs. pm-PM2.5 (secondary PM), as these two classes of compounds can increase PM2.5 toxicity.",
"keywords": [
"Photochemistry",
"Los Angeles",
"PM2.5",
"Oxidative stress",
"Traffic",
"Primary PM",
"Secondary PM",
"Neuroinflammation"
],
"content": "Introduction\n\nParticulate matter (PM) with an aerodynamic diameter less than 2.5 µm (fine PM or PM2.5), is associated with diverse health problems and chronic diseases, including asthma, chronic obstructive pulmonary disease (COPD), lung cancer, and coronary heart disease (Delfino et al., 2005; Delfino et al., 2011; Dockery et al., 1993; Dominici et al., 2006; Kaufman et al., 2016; Kim et al., 2013; Landrigan et al., 2018; Shah et al., 2013). Findings of recent epidemiological studies extend chronic PM2.5 exposure risk to Alzheimer’s disease and accelerated cognitive decline (Cacciottolo et al., 2017; Chen et al., 2015; Chen et al., 2017). Corresponding rodent models show robust indicators of inflammatory and oxidative stress to PM2.5 fractions in pathological responses of aorta (Li et al., 2003), brain (Cheng et al., 2016b; Levesque et al., 2011; MohanKumar et al., 2008; Morgan et al., 2011), and lung (Zhang et al., 2012).\n\nIn addition to the epidemiological associations with chronic disease, we must also consider diurnal variations in airborne particulate matter chemistry that are not included in most long-term epidemiological studies. Diurnal variation in air pollution toxicity is suggested by diurnal variations in emergency department admissions for dementia (Linares et al., 2017), ischemic stroke (Han et al., 2016), and respiratory conditions (Darrow et al., 2011). Although these admissions were more strongly associated with ozone than with PM2.5 in all three of these studies, diurnal changes in PM2.5 chemistry must also be considered as an influencing factor. Freshly emitted primary PM undergoes photochemical oxidation reactions over the course of the day, catalyzed by ultraviolet (UV) sunlight, which results in diverse oxidized organic and inorganic products (secondary PM) (Forstner et al., 1997; Grosjean & Seinfeld, 1989), along with concomitant changes in PM toxicity. 
These diurnal changes in PM2.5 composition and associated toxicity are relevant to and may inform future long-term epidemiological studies of primary and secondary particulate matter. While prior studies in the Los Angeles air basin have shown extensive diurnal variations in PM composition and size, the findings of PM oxidative activity have been inconsistent and differ between various assays of oxidative potential (Saffari et al., 2015; Verma et al., 2009; Wang et al., 2013b).\n\nThe current study further examined diurnal variations in composition and oxidative potential of PM samples collected at the central Los Angeles site used in the three studies mentioned above. However, unlike these earlier studies, PM samples were collected by a direct aerosol-into-liquid collection method to provide time-integrated aqueous PM2.5 slurries for both morning and afternoon periods. This technology allows for a more comprehensive analysis than the filterable (i.e. water extracted) particulate samples examined in our prior studies (Morgan et al., 2011; Saffari et al., 2015; Verma et al., 2009; Woodward et al., 2017a).\n\nMicroglia were used for in vitro assays of oxidative and inflammatory responses to PM2.5 exposures because of their increasingly recognized role in environmental neurotoxicology (Krafft, 2015). Air pollution can induce premature microglial activation, as documented in rodent models (Cheng et al., 2016a; Hanamsagar & Bilbo, 2017; Morgan et al., 2011) and as indicated for young adults living in the highly polluted Mexico City (Calderón-Garcidueñas et al., 2008; Calderón-Garcidueñas et al., 2018). Microglia (BV-2) cell cultures were assayed for induction of nitric oxide (NO) and for proinflammatory gene mRNA responses of interleukins 6 and 1β (IL-6 & IL-1β), and monocyte chemoattractant protein 1 (MCP-1), also known as chemokine (C-C motif) ligand 2 (CCL2). 
These markers were chosen because of their in vivo and in vitro responses to ultrafine PM shown in prior studies (Cheng et al., 2016b; Morgan et al., 2011; Woodward et al., 2017b).\n\nWe hypothesized that afternoon PM2.5 (pm-PM2.5), with its high proportion of secondary photochemical oxidation products, would have greater oxidative and proinflammatory activity than freshly emitted, primary PM collected during morning hours (am-PM2.5).\n\n\nMethods\n\nAll sampling was done at the University of Southern California Particle Instrumentation Unit (PIU), located approximately 150 meters downwind (east) of the Los Angeles I-110 freeway (34°1’9” N, 118°16’38” W). PM2.5 samples were collected weekdays during the morning rush hour period of 6am–9am, as well as during the afternoon hours of 12pm–4pm, when photochemical products of primary PM oxidation are dominant in the atmosphere. The 5-week sampling campaign was conducted during late summer (August and September) of 2016, ensuring maximum UV sunlight exposure to enhance photochemical oxidation reactions.\n\nParticle collection employed a novel high-volume aerosol-into-liquid collector developed and built at USC’s Sioutas Aerosol Laboratory, which provides concentrated slurries of fine and/or ultrafine PM (Wang et al., 2013a). A 2.5 µm cut-point slit impactor at the inlet to the online sampling system removed PM larger than 2.5 µm in diameter and ensured that only PM2.5 was captured in the aerosol-into-liquid collector. This sampler operates at 200 liters per minute (lpm) flow; two inlet aerosol streams, each at 100 lpm flow, are merged and passed through a steam bath where ultrapure water vapor condenses on the surfaces of airborne particles, growing the droplets to 2–3 μm in diameter. Downstream of the hot water bath, particles enter an electronic chiller, where they are cooled and condensed, passing through an impactor and accumulating in the aerosol-into-liquid collector as an aqueous PM2.5 slurry. 
For each sampling condition, morning and afternoon, one time-integrated slurry sample was collected for chemical speciation and biological assays.\n\nTo determine mass loadings of the PM2.5 slurry samples, 47 mm Zefluor filters (Pall Life Sciences, Ann Arbor, MI, USA) were used to capture PM2.5 passing through a parallel airstream at a flow rate of 9 lpm. Mass of the PM2.5 filter samples was determined gravimetrically by pre- and post-weighing the Zefluor filters, equilibrated at controlled temperature (22–24 °C) and relative humidity (40–50%) conditions. Slurry PM concentrations were calculated from the filter mass loadings and air volume sampled per time period.\n\nAqueous PM2.5 slurry samples were analyzed for metals and trace elements, total carbon (TC), and inorganic ions. Analyses were performed in triplicate on one aliquot of each slurry, morning (am-PM2.5) and afternoon (pm-PM2.5). Total metals and trace elements were quantified using magnetic sector Inductively Coupled Plasma Mass Spectrometry (SF-ICPMS) following acid extraction, while analysis of the samples for inorganic anions was achieved by ion chromatography (IC) (Zhang et al., 2008). Total carbon was determined using a Sievers 900 Total Carbon Analyzer (Sullivan et al., 2004). Uncertainty values for all analyses are reported in the results as analytical error. Each uncertainty value is calculated as the square root of the sum of squares of the instrument and blank uncertainty components (S.D. of triplicate analyses, S.D. of triplicate blank measurements).\n\nBV-2 Cell Culture. PM2.5 slurry samples were assayed with immortalized BV-2 microglia (RRID: CVCL_0182) (Eun et al., 2017; Gresa-Arribas et al., 2012). BV-2 cells were cultured in Dulbecco’s Modified Eagle’s Medium/Ham’s F12 50/50 Mix (DMEM F12 50/50; # 11320033, Life Technologies, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS; #45000–734, VWR, Radnor, PA), 1% penicillin/streptomycin (#P4333–100ML, Sigma-Aldrich, St. 
Louis, MO), and 1% L-glutamine (Glutamax; #35050061, Life Technologies, Carlsbad, CA) in a humidified incubator (37 °C/5% CO2). For cell treatments, PM2.5 slurries were diluted in the same isotonic and pH-balanced culture media and applied to cells for up to 24 hours. Cell culture experiments were done in triplicate per endpoint.\n\nNitrite Assay. Nitric oxide (NO) was assayed in BV-2 cell media by the Griess reagent (Cheng et al., 2016b; Ignarro et al., 1993). BV-2 cells at 60–70% confluence in 96-well plates (2 × 106 cells/plate) were treated with both am-PM2.5 and pm-PM2.5 at doses of 1, 5 and 20 µg/mL, 200 µL/well. At 30-minute, 60-minute and 24-hour timepoints, duplicate 50 µL aliquots of cell media were removed from each treatment well and transferred to a new 96-well plate. Within this same 96-well plate, a series of nitrite standards (50 µL/well) ranging from 0.10 to 10 µM prepared from a NaNO2 stock solution were added, thus allowing a standardization curve to be generated for use in determining the NO concentration in each treatment well from measured absorbance data. After transferring all aliquots, 50 µL of Griess reagent was added to each well and the plate was allowed to incubate at room temperature (21–23 °C) for 10 minutes, followed by spectrophotometric analysis at 548 nm absorbance using a SpectraMax M2 microplate reader (Molecular Devices, San Jose, CA, USA). The nitrite assay was performed in triplicate, with six data points collected at each PM2.5 concentration per condition.\n\nQuantitative Polymerase Chain Reaction (qPCR). The quantitative polymerase chain reaction (qPCR) assay was used to quantify upregulation of cytokines and chemokines associated with the microglial neuroinflammatory response, including IL-6, CCL2 (MCP-1), and IL-1β. 
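The nitrite standardization described above amounts to fitting a linear calibration curve and inverting it to read sample concentrations from absorbance. A minimal sketch of that arithmetic follows; the standard concentrations and absorbance values here are hypothetical illustrations, not the study's measurements:

```python
import numpy as np

# Hypothetical Griess-assay calibration: nitrite standards (µM) spanning the
# 0.10-10 µM range used in the study, with invented 548 nm absorbance readings.
standards_um = np.array([0.10, 0.5, 1.0, 2.5, 5.0, 10.0])
std_absorbance = np.array([0.012, 0.055, 0.105, 0.26, 0.52, 1.04])

# Fit a linear standard curve: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(standards_um, std_absorbance, 1)

def nitrite_conc(absorbance):
    """Invert the standard curve to estimate a sample's nitrite level (µM)."""
    return (absorbance - intercept) / slope

# A treatment well's absorbance maps back to an NO-derived nitrite estimate.
print(nitrite_conc(0.31))
```

In practice each plate carries its own standards, so the curve is refit per plate before converting the duplicate 50 µL treatment aliquots.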
BV-2 microglia were seeded in 6-well plates at 106 cells/well and grown overnight at 37 °C/5% CO2, followed by treatment with aqueous am-PM2.5 and pm-PM2.5 slurries diluted to 10 μg/ml in isotonic and pH-balanced cell culture media. A control condition, consisting of pure media diluted with ultrapure water, was also prepared. After 24 hours of incubation, treated cells were trypsinized and harvested for RNA extraction. Total cell RNA was extracted using the TRIzol reagent (Invitrogen, Carlsbad, CA), and cDNA was prepared from 1 μg of RNA (RT Master Mix, BioPioneer, San Diego, CA). Specific primers for each gene were used in conjunction with the qPCR Master Mix (BioPioneer) to run real time qPCR reactions.\n\nGenes examined by qPCR included IL-1β (forward: 5’ CTAAAGTATGGGCTGGACTG 3’; reverse: 5’ GGCTCTCTTTGAACAGAATG 3’), IL-6 (forward: 5’ TGCCTTCTTGGGACTGATGCT 3’; reverse: 5’ GCATCCATCATTTCTTTGTAT 3’), MCP-1 (forward: 5’ CCCAATGAGTAGGCTGGAGA 3’; reverse: 5’ TCTGGACCCATTCCTTCTTG 3’), and GAPDH (forward: 5’ AGACAGCCGCATCTTCTTGT 3’; reverse: 5’ CTTGCCGTGGGTAGAGTCAT 3’) (Integrated DNA Technologies, Skokie, IL). Data were normalized to GAPDH and quantified as ΔΔCt. qPCR was repeated, with 12 data points collected per treatment (am-PM2.5 and pm-PM2.5; 10 µg/mL).\n\nStatistical analysis. Results were evaluated by 2-way repeated measures ANOVA statistical analysis and Bonferroni post hoc tests using GraphPad Prism (v. 6.04) statistical software.\n\n\nResults\n\nA dose-dependent NO response to PM2.5 treatments relative to control was observed at all timepoints (30 min., 60 min., 24 hr.), which was greater for am-PM2.5 than pm-PM2.5 exposures (Figure 1). am-PM2.5 samples induced consistently higher levels of NO for all concentrations and post-exposure timepoints, with a peak effect, 7-fold greater than control (p = 0.0077), observed at 60 minutes in response to the highest am-PM2.5 dose of 20 µg/mL (Figure 1A). 
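The ΔΔCt quantification used for the qPCR data, normalizing each target gene to GAPDH and expressing treated samples relative to control, reduces to the standard 2^-ΔΔCt calculation. A sketch with invented Ct values (not the study's data) follows:

```python
# Relative gene expression by the ddCt (2^-ddCt) method, as used for the
# IL-1-beta, IL-6, and MCP-1 qPCR data. All Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    # dCt: target gene normalized to the reference (GAPDH) in each condition.
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    # ddCt: treated relative to control; expression ratio is 2^-ddCt.
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: a target amplifying ~2.35 cycles earlier (GAPDH-normalized) after
# treatment corresponds to roughly a 5-fold induction.
print(fold_change(22.0, 18.0, 25.35, 19.0))
```

Equal ΔCt in treated and control samples gives a fold change of exactly 1, i.e. no induction.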
At 30 minutes post-treatment, there was also a significant 5.3-fold increase of am-PM2.5 relative to control (p = 0.0020), and a significant difference between the responses to am-PM2.5 and pm-PM2.5, with am-PM2.5 eliciting a 3.1-fold greater NO response than pm-PM2.5 (p = 0.0094). There was also a significant 2.9-fold increase of am-PM2.5 relative to control (p = 0.0007) at 24 hours post-treatment. The NO responses to pm-PM2.5 paralleled the effects of am-PM2.5 exposures, but were at least 50% smaller (Figure 1B): the 20 µg/mL pm-PM2.5 treatment induced 1.7-, 3.5-, and 2.0-fold increases in NO concentration relative to control at 30 min., 60 min. and 24 hrs., respectively, but these effects were not significant.\n\nBV-2 microglial responses to PM2.5 slurries in vitro, assayed in culture media by the Griess reaction (control = 1.0 µM nitrite). A. Morning samples (am-PM2.5); B. Afternoon samples (pm-PM2.5). am-PM2.5 samples induced consistently higher NO responses for all concentrations and post-exposure timepoints. At 30 minutes post-treatment, there was a significant effect of am-PM2.5, as well as a significant difference between the responses to am-PM2.5 and pm-PM2.5 (overall ANOVA: p = 0.0017; am-PM2.5 20 µg/mL vs. control: 5.3-fold increase, p = 0.0020; am-PM2.5 20 µg/mL vs. pm-PM2.5 20 µg/mL: 3.1-fold increase, p = 0.0094). There was also a significant effect of am-PM2.5 at 60 minutes post-treatment (overall ANOVA: p = 0.010; am-PM2.5 20 µg/mL vs. control: 7.0-fold increase, p = 0.0077). At 24 hours, a significant effect of am-PM2.5 treatment was also observed (overall ANOVA: p = 0.0005; am-PM2.5 20 µg/mL vs. control: 2.9-fold increase, p = 0.0007). Mean ± SE (n = 3 experiments). 2-way repeated measures ANOVA statistical analysis with Bonferroni post hoc tests: *p≤0.05, **p≤0.01, ***p≤0.001, ****p≤0.0001.\n\nBV-2 cells were treated with 10 μg/ml of am-PM2.5 and pm-PM2.5 and analyzed for mRNA responses by qPCR after 24 hours of incubation. 
The 10 μg/ml dose was chosen as being below the threshold for metabolic impairment based on prior studies from our group (e.g. Cheng et al., 2016b; Morgan et al., 2011; Woodward et al., 2017b). Induction of all three cytokines was increased by both morning and afternoon PM2.5 samples, with more modest responses to pm-PM2.5 (Figure 2). As shown in Figure 2A, treatment with am-PM2.5 induced a significant 4.8-fold increase in IL-1β expression relative to control (p = 0.0070). Both am-PM2.5 and pm-PM2.5 induced significant increases in IL-6 mRNA production relative to control, with am-PM2.5 exposure resulting in a 5.1-fold increase (p < 0.0001) and pm-PM2.5 resulting in a 3.5-fold increase (p = 0.0046) (Figure 2B). Treatment with am-PM2.5 also induced a significant 2.0-fold increase in MCP-1 mRNA production (p = 0.0022), while pm-PM2.5 had a 33% smaller effect (Figure 2C). This difference in MCP-1 mRNA production induced by am-PM2.5 as compared to pm-PM2.5 was marginally significant (p = 0.0527).\n\nAfter exposing BV-2 cells to 10 µg/mL of morning (am-PM2.5) and afternoon (pm-PM2.5) slurries, cellular mRNA production was assessed by qPCR. Relative to control, both am-PM2.5 and pm-PM2.5 exposures increased mRNA levels of A. Interleukin 1β (IL-1β), B. Interleukin 6 (IL-6), and C. Monocyte chemoattractant protein 1 (MCP-1). Treatment with am-PM2.5 induced a significant 4.8-fold increase in IL-1β expression relative to control (overall ANOVA: p = 0.0090; am-PM2.5: 4.8-fold increase, p = 0.0070). Both am-PM2.5 and pm-PM2.5 induced significant increases in IL-6 mRNA production (overall ANOVA: p < 0.0001; am-PM2.5: 5.1-fold increase, p < 0.0001; pm-PM2.5: 3.5-fold increase, p = 0.0046). Treatment with am-PM2.5 also induced a significant increase in MCP-1 mRNA production, while pm-PM2.5 had an effect 33% smaller than am-PM2.5 (overall ANOVA: p = 0.0028; am-PM2.5: 2.0-fold increase, p = 0.0022; am-PM2.5 vs. pm-PM2.5: p = 0.0527). Mean ± SE (n = 12). 
2-way repeated measures ANOVA statistical analysis with Bonferroni post hoc tests: *p≤0.05, **p≤0.01, ***p≤0.001, ****p≤0.0001.\n\nThe am-PM2.5 and pm-PM2.5 time-integrated aqueous slurry samples were analyzed for chemical composition, including total carbon (TC), inorganic ions, and total metals and trace elements, and are presented as PM2.5 mass fractions in Figures 3A, 3B, and 3C, respectively. PM2.5 TC content decreased by 40% from morning (0.50 μg/μg-PM) to afternoon (0.31 μg/μg-PM) (Figure 3A). Mass concentrations of inorganic secondary ions (NO3-, SO42-, NH4+, Na+) were approximately 5-fold higher in the afternoon as compared to morning slurries (Figure 3B). For the sixteen metals and trace elements analyzed, the am-PM2.5 slurry contained higher mass concentrations of several measured elements as compared to the pm-PM2.5 slurry (Figure 3C, note log scale; Table S1, Supplementary File 1). Arsenic, chromium, and manganese showed the largest diurnal decline, represented as am-PM2.5:pm-PM2.5 ratios: arsenic (11.6), chromium (7.9), and manganese (6.0).\n\nTime-integrated PM2.5 slurries collected during morning (6–9am) and afternoon (12–4pm) periods analyzed for A. Total Carbon (TC), B. Inorganic ions (ion chromatography), C. Total metals and trace elements (ICP-MS). Mean values presented are based on triplicate analysis of one sample aliquot. Error bars represent laboratory uncertainty values based on contributions of analytical error (standard deviation) and blank subtraction (standard deviation of at least three method blanks).\n\n\nDiscussion\n\nDiurnal variations in urban PM2.5 oxidative and proinflammatory activity showed consistent decreases from morning to afternoon sampling periods in two independent in vitro assays using the BV-2 microglia cell line. 
The collection of total PM2.5 as an aqueous slurry was enabled by direct aerosol-into-liquid sampling that more efficiently captures water-insoluble components of ambient PM2.5 than traditional filter-based sampling methods used in several prior studies (e.g. Saffari et al., 2015; Verma et al., 2009). These slurry samples are more representative of the full range of ambient PM components and their toxicities than filter-trapped and water eluted PM. Additionally, the results of the NO assay and the qPCR assay for inflammatory gene responses extend findings from the widely used dithiothreitol (DTT) and alveolar macrophage (dichlorodihydrofluorescein, DCFH) assays for oxidative potential, which can be confounded by oxidative recycling from transition metals (Forman & Finch, 2018). Our findings, that primary PM2.5 results in a greater oxidative and proinflammatory response than secondary PM2.5, are contrary to expectations based on prior reports that secondary, photo-oxidized PM exhibits greater oxidative activity than primary PM.\n\nPrevious studies of diurnal variations in PM composition and oxidative activity have not been consistent and were limited in using only simple assays of oxidative potential (i.e. DTT and DCFH) on filter-captured PM. Relying solely on oxidative potential measures such as the DTT and DCFH assays provides us with only an imprecise measure of cellular oxidative and proinflammatory activity that lacks specificity. The current study improves on the experimental design of past studies by utilizing direct measures of acute oxidative stress and inflammation, including free radical production induced by PM as nitric oxide (NO) and cellular proinflammatory mRNA responses. 
Additionally, by using the direct aerosol-into-liquid method to collect aqueous slurries in our study, water-insoluble PM species were more efficiently captured, providing samples more representative of the full range of ambient PM components and their toxicities.\n\nFurther insight into the sources of particulate toxicity may be gleaned by the apportionment of redox properties to its water soluble and insoluble chemical components, including water-soluble and water-insoluble organic carbon (WSOC and WIOC, respectively). WSOC species are generally defined as hydrophilic, while WIOC are hydrophobic (Turpin & Lim, 2001). Wang et al. (2013b) collected aqueous PM2.5 slurries by a similar aerosol-into-liquid sampling method, and found that increased WIOC content in PM2.5, relative to WSOC content, was highly correlated with redox activity on a per mass basis, indicating a greater intrinsic toxicity of WIOC as compared to WSOC. While this study was limited by its use of the DCFH assay, the greater oxidative potential associated with increased WIOC mass concentrations was attributed to organic compounds such as PAHs, as well as iron and other transition metals.\n\nOur results indicate that morning PM2.5, which contains a greater proportion of water-insoluble species, may be intrinsically more toxic and induce greater cellular oxidative stress, than afternoon PM2.5 samples that contain a larger mass fraction of oxidized, water-soluble species that are products of photochemical reactions in the atmosphere (Seinfeld & Pandis, 2016), including the inorganic secondary ions NO3-, SO42-, NH4+, and Na+. 
The mechanisms underlying the greater toxicity of primary, morning PM2.5 may involve non-polar WIOC components, such as PAHs, being able to more easily permeate the hydrophobic lipid bilayer of cell membranes to trigger the formation of intracellular oxidative species and induce proinflammatory cytokine formation via an acute oxidative stress response.\n\nPrimary, traffic-derived PM2.5 also contains greater concentrations of redox-active and other toxic metals, as compared to the bulk of secondary PM2.5, which consists largely of hydrophilic products of photochemical oxidation. The metals and trace elements we found to be more prevalent in the morning slurry sample included the heavy metals vanadium, chromium, nickel, and arsenic, which are emitted by vehicles both as fuel combustion products and as remnants of motor oil degradation (Geller et al., 2006); copper, which is associated with vehicular brake wear (Garg et al., 2000; Sanders et al., 2003; Sternbeck et al., 2002); and zinc, which is primarily a product of tire deterioration (Singh et al., 2002). Elevated levels of these metals in both collection periods correspond to vehicular emissions as the major source of primary particles in close proximity to the I-110 freeway. We believe the higher proportions of these metals and WIOC components in primary PM2.5, dominant in the morning hours, as compared to photo-oxidized secondary PM2.5, prevalent in the afternoon, are responsible for the diurnal variation in acute oxidative stress observed in the current study.\n\n\nSummary and conclusions\n\nThe data presented in this study demonstrate that urban PM2.5 collected during the morning rush hour (6–9am), when primary, traffic-derived PM emissions are dominant, induces greater oxidative and proinflammatory responses in cells as compared to PM2.5 collected in the afternoon (12–4pm), which contains a higher proportion of photo-oxidized, secondary PM products. 
Two in vitro assays of the cellular inflammatory response consistently demonstrated greater oxidative and proinflammatory activity due to primary (morning) PM2.5 exposure. We attribute this effect to the greater transition metal and water-insoluble organic carbon (WIOC) content of primary PM2.5, two classes of PM components that increase toxicity (Cho et al., 2005; Hu et al., 2008; Li et al., 2009; Shirmohammadi et al., 2015; Tao et al., 2003; Zhang et al., 2008). Our study also improves upon previous research of diurnal variations in PM-induced oxidative stress by utilizing a unique aerosol-into-liquid PM collection system that more efficiently captures water insoluble components, thus providing complete aqueous PM samples more representative of ambient PM.\n\nThis research will ultimately help us gain a more complete understanding of the complex nature of particulate matter and how its composition and proinflammatory effects change over time due to photochemical aging in the atmosphere. The Southern California climate of Los Angeles with abundant sunshine, compounded with dense vehicular traffic, generates ubiquitous primary and secondary PM throughout the year. Identifying the health effects of these pollutants is critical as we strive to understand the underlying mechanisms of PM-induced oxidative stress, neuroinflammation and associated morbidity. 
Our findings may help in further elucidating the role of PM in the etiology, onset and development of widespread, chronic diseases that plague urban populations, including cancer, cardiac and respiratory distress, and neurodegenerative disorders such as Alzheimer’s disease.\n\n\nData availability\n\nDataset 1: The following raw data sets are provided as comma separated values (.csv) files: 10.5256/f1000research.14836.d203329 (Lovett et al., 2018)\n\nPM_Diurnal_Variation_NO_Fig1_DATA\n\nPM_Diurnal_Variation_qPCR_Fig2_DATA\n\nPM_Diurnal_Variation_TC_Fig3A_DATA\n\nPM_Diurnal_Variation_Ions_Fig3B_DATA\n\nPM_Diurnal_Variation_Metals_Fig3C_DATA",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported in part by the University of Southern California Viterbi Dean’s Ph.D. Fellowship, and by NIH research grants RF1-AG051521-01 and R21-AG050201-01A1.\n\n\nSupplementary material\n\nTable S1. Average concentrations and uncertainty values of total carbon, inorganic ions, metals and trace elements in ambient PM2.5 slurry samples collected during morning and afternoon periods.\n\nClick here to access the data.\n\n\nReferences\n\nCacciottolo M, Wang X, Driscoll I, et al.: Particulate air pollutants, APOE alleles and their contributions to cognitive impairment in older women and to amyloidogenesis in experimental models. Transl Psychiatry. 2017; 7(1): e1022. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCalderón-Garcidueñas L, Gónzalez-Maciel A, Reynoso-Robles R, et al.: Hallmarks of Alzheimer disease are evolving relentlessly in Metropolitan Mexico City infants, children and young adults. APOE4 carriers have higher suicide risk and higher odds of reaching NFT stage V at ≤ 40 years of age. Environ Res. 2018; 164: 475–487. PubMed Abstract | Publisher Full Text\n\nCalderón-Garcidueñas L, Solt AC, Henríquez-Roldán C, et al.: Long-term air pollution exposure is associated with neuroinflammation, an altered innate immune response, disruption of the blood-brain barrier, ultrafine particulate deposition, and accumulation of amyloid beta-42 and alpha-synuclein in children and young adults. Toxicol Pathol. 2008; 36(2): 289–310. PubMed Abstract | Publisher Full Text\n\nChen H, Kwong JC, Copes R, et al.: Living near major roads and the incidence of dementia, Parkinson’s disease, and multiple sclerosis: a population-based cohort study. Lancet. 2017; 389(10070): 718–726. 
PubMed Abstract | Publisher Full Text\n\nChen JC, Wang X, Wellenius GA, et al.: Ambient air pollution and neurotoxicity on brain structure: Evidence from women’s health initiative memory study. Ann Neurol. 2015; 78(3): 466–476. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheng H, Davis DA, Hasheminassab S, et al.: Urban traffic-derived nanoparticulate matter reduces neurite outgrowth via TNFα in vitro. J Neuroinflammation. 2016a; 13(1): 19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheng H, Saffari A, Sioutas C, et al.: Nanoscale Particulate Matter from Urban Traffic Rapidly Induces Oxidative Stress and Inflammation in Olfactory Epithelium with Concomitant Effects on Brain. Environ Health Perspect. 2016b; 124(10): 1537–1546. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCho AK, Sioutas C, Miguel AH, et al.: Redox activity of airborne particulate matter at different sites in the Los Angeles Basin. Environ Res. 2005; 99(1): 40–47. PubMed Abstract | Publisher Full Text\n\nDarrow LA, Klein M, Sarnat JA, et al.: The use of alternative pollutant metrics in time-series studies of ambient air pollution and respiratory emergency department visits. J Expo Sci Environ Epidemiol. 2011; 21(1): 10–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDelfino RJ, Sioutas C, Malik S: Potential role of ultrafine particles in associations between airborne particle mass and cardiovascular health. Environ Health Perspect. 2005; 113(8): 934–46. PubMed Abstract | Free Full Text\n\nDelfino RJ, Staimer N, Vaziri ND: Air pollution and circulating biomarkers of oxidative stress. Air Qual Atmos Hlth. 2011; 4(1): 37–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDockery DW, Pope CA 3rd, Xu X, et al.: An association between air pollution and mortality in six U.S. cities. N Engl J Med. 1993; 329(24): 1753–1759. 
PubMed Abstract | Publisher Full Text\n\nDominici F, Peng RD, Bell ML, et al.: Fine particulate air pollution and hospital admission for cardiovascular and respiratory diseases. JAMA. 2006; 295(10): 1127–1134. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEun CS, Lim JS, Lee J, et al.: The protective effect of fermented Curcuma longa L. on memory dysfunction in oxidative stress-induced C6 gliomal cells, proinflammatory-activated BV2 microglial cells, and scopolamine-induced amnesia model in mice. BMC Complement Altern Med. 2017; 17(1): 367. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForman HJ, Finch CE: A critical review of assays for hazardous components of air pollution. Free Radic Biol Med. 2018; 117: 202–217. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForstner HJ, Flagan RC, Seinfeld JH: Secondary organic aerosol from the photooxidation of aromatic hydrocarbons: Molecular composition. Environ Sci Technol. 1997; 31(5): 1345–1358. Publisher Full Text\n\nGarg BD, Cadle SH, Mulawa PA, et al.: Brake wear particulate matter emissions. Environ Sci Technol. 2000; 34(21): 4463–4469. Publisher Full Text\n\nGeller M, Biswas S, Sioutas C: Determination of particle effective density in urban environments with a differential mobility analyzer and aerosol particle mass analyzer. Aerosol Sci Tech. 2006; 40(9): 709–723. Publisher Full Text\n\nGresa-Arribas N, Viéitez C, Dentesano G, et al.: Modelling neuroinflammation in vitro: a tool to test the potential neuroprotective effect of anti-inflammatory agents. PLoS One. 2012; 7(9): e45227. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrosjean D, Seinfeld JH: Parameterization of the formation potential of secondary organic aerosols. Atmos Environ (1967). 1989; 23(8): 1733–1747. Publisher Full Text\n\nHan MH, Yi HJ, Kim YS, et al.: Association between Diurnal Variation of Ozone Concentration and Stroke Occurrence: 24-Hour Time Series Study. PLoS One. 2016; 11(3): e0152433. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHanamsagar R, Bilbo SD: Environment matters: microglia function and dysfunction in a changing world. Curr Opin Neurobiol. 2017; 47: 146–155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHu S, Polidori A, Arhami M, et al.: Redox activity and chemical speciation of size fractioned PM in the communities of the Los Angeles-Long Beach harbor. Atmos Chem Phys. 2008; 8(21): 6439–6451. Publisher Full Text\n\nIgnarro LJ, Fukuto JM, Griscavage JM, et al.: Oxidation of nitric oxide in aqueous solution to nitrite but not nitrate: comparison with enzymatically formed nitric oxide from L-arginine. Proc Natl Acad Sci U S A. 1993; 90(17): 8103–8107. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaufman JD, Adar SD, Barr RG, et al.: Association between air pollution and coronary artery calcification within six metropolitan areas in the USA (the Multi-Ethnic Study of Atherosclerosis and Air Pollution): a longitudinal cohort study. Lancet. 2016; 388(10045): 696–704. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim KH, Jahan SA, Kabir E: A review on human health perspective of air pollution with respect to allergies and asthma. Environ Int. 2013; 59: 41–52. PubMed Abstract | Publisher Full Text\n\nKrafft AD: The use of glial data in human health assessments of environmental contaminants. Toxicology. 2015; 333: 127–136. PubMed Abstract | Publisher Full Text\n\nLandrigan PJ, Fuller R, Acosta NJR, et al.: The Lancet Commission on pollution and health. Lancet. 2018; 391(10119): 462–512. PubMed Abstract | Publisher Full Text\n\nLevesque S, Taetzsch T, Lull ME, et al.: Diesel exhaust activates and primes microglia: Air pollution, neuroinflammation, and regulation of dopaminergic neurotoxicity. Environ Health Perspect. 2011; 119(8): 1149–1155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi N, Hao M, Phalen RF, et al.: Particulate air pollutants and asthma. 
a paradigm for the role of oxidative stress in PM-induced adverse health effects. Clin Immunol. 2003; 109(3): 250–265. PubMed Abstract | Publisher Full Text\n\nLi N, Wang M, Bramble LA, et al.: The adjuvant effect of ambient particulate matter is closely reflected by the particulate oxidant potential. Environ Health Perspect. 2009; 117(7): 1116–1123. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLinares C, Culqui D, Carmona R, et al.: Short-term association between environmental factors and hospital admissions due to dementia in Madrid. Environ Res. 2017; 152: 214–220. PubMed Abstract | Publisher Full Text\n\nLovett C, Cacciottolo M, Shirmohammadi F, et al.: Dataset 1 in: Diurnal variation in the proinflammatory activity of urban fine particulate matter (PM2.5) by in vitro assays. F1000Research. 2018. Data Source\n\nMohanKumar SM, Campbell A, Block M, et al.: Particulate matter, oxidative stress and neurotoxicity. Neurotoxicology. 2008; 29(3): 479–488. PubMed Abstract | Publisher Full Text\n\nMorgan TE, Davis DA, Iwata N, et al.: Glutamatergic neurons in rodent models respond to nanoscale particulate urban air pollutants in vivo and in vitro. Environ Health Perspect. 2011; 119(7): 1003–1009. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaffari A, Hasheminassab S, Wang D, et al.: Impact of primary and secondary organic sources on the oxidative potential of quasi-ultrafine particles (PM0.25) at three contrasting locations in the Los Angeles Basin. Atmos Environ. 2015; 120: 286–296. Publisher Full Text\n\nSanders PG, Xu N, Dalka TM, et al.: Airborne brake wear debris: size distributions, composition, and a comparison of dynamometer and vehicle tests. Environ Sci Technol. 2003; 37(18): 4060–4069. PubMed Abstract | Publisher Full Text\n\nSeinfeld JH, Pandis SN: Atmospheric chemistry and physics: from air pollution to climate change. John Wiley & Sons. 2016. 
Reference Source\n\nShah AS, Langrish JP, Nair H, et al.: Global association of air pollution and heart failure: a systematic review and meta-analysis. Lancet. 2013; 382(9897): 1039–1048. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShirmohammadi F, Hasheminassab S, Wang D, et al.: Oxidative potential of coarse particulate matter (PM(10-2.5)) and its relation to water solubility and sources of trace elements and metals in the Los Angeles Basin. Environ Sci Process Impacts. 2015; 17(12): 2110–2121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSingh M, Jaques PA, Sioutas C: Size distribution and diurnal characteristics of particle-bound metals in source and receptor sites of the Los Angeles Basin. Atmos Environ. 2002; 36(10): 1675–1689. Publisher Full Text\n\nSternbeck J, Sjödin Å, Andréasson K: Metal emissions from road traffic and the influence of resuspension—results from two tunnel studies. Atmos Environ. 2002; 36(30): 4735–4744. Publisher Full Text\n\nSullivan AP, Weber RJ, Clements AL, et al.: A method for on-line measurement of water-soluble organic carbon in ambient aerosol particles: Results from an urban site. Geophys Res Lett. 2004; 31(13). Publisher Full Text\n\nTao F, Gonzalez-Flecha B, Kobzik L: Reactive oxygen species in pulmonary inflammation by ambient particulates. Free Radic Biol Med. 2003; 35(4): 327–340. PubMed Abstract | Publisher Full Text\n\nTurpin BJ, Lim HJ: Species contributions to PM2.5 mass concentrations: revisiting common assumptions for estimating organic mass. Aerosol Sci Tech. 2001; 35(1): 602–610. Publisher Full Text\n\nVerma V, Ning Z, Cho AK, et al.: Redox activity of urban quasi-ultrafine particles from primary and secondary sources. Atmos Environ. 2009; 43(40): 6360–6368. Publisher Full Text\n\nWang D, Pakbin P, Saffari A, et al.: Development and evaluation of a high-volume Aerosol-into-liquid collector for fine and ultrafine particulate matter. Aerosol Sci Tech. 2013a; 47(11): 1226–1238. 
Publisher Full Text\n\nWang D, Pakbin P, Shafer MM, et al.: Macrophage reactive oxygen species activity of water-soluble and water-insoluble fractions of ambient coarse, PM2.5 and ultrafine particulate matter (PM) in Los Angeles. Atmos Environ. 2013b; 77: 301–310. Publisher Full Text\n\nWoodward NC, Levine MC, Haghani A, et al.: Toll-like receptor 4 in glial inflammatory responses to air pollution in vitro and in vivo. J Neuroinflammation. 2017a; 14(1): 84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoodward NC, Pakbin P, Saffari A, et al.: Traffic-related air pollution impact on mouse brain accelerates myelin and neuritic aging changes with specificity for CA1 neurons. Neurobiol Aging. 2017b; 53: 48–58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang H, Liu H, Davies KJ, et al.: Nrf2-regulated phase II enzymes are induced by chronic ambient nanoparticle exposure in young mice with age-related impairments. Free Radic Biol Med. 2012; 52(9): 2038–2046. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang Y, Schauer JJ, Shafer MM, et al.: Source apportionment of in vitro reactive oxygen species bioassay activity from atmospheric particulate matter. Environ Sci Technol. 2008; 42(19): 7502–7509. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "35515",
"date": "30 Jul 2018",
"name": "Ning Li",
"expertise": [
"Reviewer Expertise air pollution and allergic airway inflammation"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral comments This study examined the relationship between diurnal changes in the chemical properties and the pro-oxidant and pro-inflammatory activities of PM2.5 collected in Los Angeles area. The hypothesis was that compared to primary, morning PM (am-PM), secondary PM in the afternoon (pm-PM) would induce a stronger inflammatory response in microglial cells. PM were collected using an aerosol-into-liquid collection system. Cellular endpoints included nitric oxide induction and the expression of IL-1b, IL-6 and MCP-1 genes. Characterization of PM included analyses of metal and trace elements, total carbon and inorganic ions. The findings were contrary to the authors’ expectation. Morning PM had stronger effects in inducing NO production and up-regulating IL-1b, IL-6 and MCP-1 gene expression than pm-PM. It was concluded that the diurnal differences between am-PM and pm-PM may be caused by the greater content of transition metals and water-insoluble organic carbon of am-PM (primary PM). This work has two strengths: 1) diurnal changes in the chemical properties and adverse health effects of ambient PM have not been well studied and 2) the use of aerosol-into-liquid collection system reduces the loss of PM components.\n\nSpecific comments\n\nFig. 1. The highest PM concentration was 20 μg/ml. The authors indicated that “The 10 μg/ml dose was chosen as below threshold for metabolic impairment based on prior studies from our group”. Was metabolic impairment assessed at 20 μg/ml? 
If yes, was there any cellular injury? Were endotoxin levels in these PM samples measured? Was there any difference between am-PM and pm-PM? The authors explained the rationale for not using DTT and DCF-DA assays. What was the rationale for selecting NO instead of other indicators (e.g., HO-1 or GSH/GSSG) to assess oxidative stress?\n\nFig. 2. What are the reasons that there is no * above pm-PM2.5 in Figures 2A (IL-1b) and 2C (MCP-1)? Were pm-PM2.5-induced increases of IL-1b and MCP-1 significantly different from respective control? What is the p-value of am-PM2.5 vs. pm-PM2.5 in Fig. 2A?\n\nFig. 3A, 3B and 3C (As, Co, Cr, Fe, Mn and Ni). Are the differences between am-PM and pm-PM statistically significant?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3863",
"date": "01 Aug 2018",
"name": "Christopher Lovett",
"role": "Author Response",
"response": "Authors’ responses to specific comments of Dr. Ning Li:1. Fig. 1. The highest PM concentration was 20 μg/ml. The authors indicated that “The 10 μg/ml dose was chosen as below threshold for metabolic impairment based on prior studies from our group.” Was metabolic impairment assessed at 20 μg/ml? If yes, was there any cellular injury?Response: The MTT assay was conducted with BV-2 cells and included four doses (1, 5, 10, 20 μg/mL) of PM2.5 slurry samples, followed by 24 hours incubation time. Significant reductions in mitochondrial activity occurred only at the highest PM2.5 dose, 20 μg/mL, in both am-PM2.5 and pm-PM2.5 samples. However, this activity was still above the 50% threshold. Additionally, given strong cell responses at 20 μg/mL for NO and cytokines, we infer that cells were viable at the highest dose.2. Were endotoxin levels in these PM samples measured? Was there any difference between am-PM and pm-PM?Response: Prior studies using the Limulus assay did not detect endotoxin (Woodward et al., 2017) in ambient PM samples collected at the same central Los Angeles location. Additionally, all sample collection and measurement equipment was routinely sanitized with 70% ethanol solution prior to use each week, per our strict laboratory hygiene protocols.3. The authors explained the rationale for not using DTT and DCF-DA assays. What was the rationale for selecting NO instead of other indicators (e.g., HO-1 or GSH/GSSG) to assess oxidative stress?Response: Nitric oxide (NO) secretion, measured by the Griess assay, was chosen as a biomarker of the oxidative stress response based on its reliability as an index of PM-induced oxidative stress. Our research group has previously used this measure in several published studies of nPM exposure in vitro (e.g. Davis et al., 2013; Cheng et al., 2016).4. Fig. 2. What are the reasons that there is no * above pm-PM2.5 in Figures 2A (IL-1b) and 2C (MCP-1)? 
Were pm-PM2.5-induced increases of IL-1b and MCP-1 significantly different from respective control? What is p-value of am-PM2.5 vs. pm-PM2.5 in Fig. 2A? Response: In Figure 2A (IL-1β), the response to pm-PM2.5 was 3.3-fold above control, with marginal significance (p = 0.14). The IL-1β response to am-PM2.5 was 1.5-fold above the pm-PM2.5 response (not significant, p = 0.41). In Figure 2C (MCP-1), the response to pm-PM2.5 was 1.3-fold above control (not significant, p = 0.44).5. Figs. 3A, 3B and 3C (As, Co, Cr, Fe, Mn and Ni). Are the differences between am-PM and pm-PM statistically significant?Response: The chemical data presented in Figures 3A, 3B, and 3C (total carbon, inorganic ions, total metals/trace elements) include error bars representing laboratory measurement uncertainty. However, because only one time-integrated slurry sample (collected over several hours, from either 6-9am or 12-4pm, during each weekday for 5 weeks) was analyzed in each condition (n = 1), it was not possible to do an ANOVA to determine the statistical significance of differences in concentrations of various chemical species between the am-PM2.5 and pm-PM2.5 samples."
}
]
},
{
"id": "36102",
"date": "03 Sep 2018",
"name": "Kent E. Pinkerton",
"expertise": [
"Reviewer Expertise Inhalation toxicology of gases",
"particles and fibers"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe research article by Lovett and colleagues, Diurnal variation in the proinflammatory activity of urban fine particulate matter (PM2.5) by in vitro assays”, is a fascinating and provocative article, however, not without some controversy in its interpretation.\n\nThe authors state ambient PM2.5 undergoes diurnal changes in chemical composition due to photochemical oxidation. In this study the authors examined the relationships between oxidative activity and inflammatory responses associated with these diurnal chemical changes. Because secondary PM contains a higher fraction of oxidized PM species, the authors hypothesize PM2.5 collected during afternoon hours induce a greater inflammatory response than primary, morning PM2.5.\n\nThe methods used for PM collections methods are highly appropriate and well executed. The in vitro biological methods implemented with immortalized microglia cells are logical and clearly described. The authors state microglia were used for in vitro assays of oxidative and inflammatory responses to PM2.5 exposures because of their increasingly recognized role in environmental neurotoxicology. These are all correct statements. 
The difficulty comes in the interpretation of the findings.\n\nSpecific Comments:\n\nIt is unclear how the dose of PM delivered to microglia in vitro would compare to the in vivo setting, where PM and/or oxidation reaction products would need to be transported either through the olfactory epithelium of the nasal cavity and/or via the blood passing through the lungs. The authors need to provide further clarification as to the credibility of using microglia cells in culture to mimic neurotoxicology. The authors need to clarify these concerns.\n\nThe authors conducted a five-week sampling campaign during late summer (August and September) of 2016, and ensuring maximum UV sunlight exposure to enhance photochemical oxidation reactions seems reasonable. However, how was the stability of the PM samples maintained over such a prolonged sampling period of five weeks? How stable are photochemical oxidation reactions?\n\nPM2.5 slurry samples were assayed with immortalized BV-2 microglia. It is unclear in the text what concentration of PM2.5 slurry samples was used for these assays. Please state in the text.\n\nBased on Figure 1, were 1, 5 and 20 μg/mL of PM2.5 used? How do these doses compare to the concentration of PM2.5 actually reaching microglia cells in vivo?\n\nNitric oxide (NO) was assayed in BV-2 cell media by the Griess reagent at 30 minute, 60 minute and 24 hour timepoints. A NaNO2 stock solution was used to create a standardization curve to determine the NO concentration in each treatment well from measured absorbance data. The authors observed a dose-dependent NO response to PM2.5 treatments relative to control at all timepoints (30 min., 60 min., 24 hr.). This assay demonstrated greater effects for morning PM2.5 than for afternoon PM2.5. This assay is quite remarkable, meritorious and clearly illustrated. The interpretation of these findings needs to be clearly stated.\n\nSummary and Conclusion. 
The first portion of this section is nicely written to state “urban PM2.5 collected during the morning rush hour (6–9am), when primary, traffic-derived PM emissions are dominant, induces greater oxidative and proinflammatory responses in cells as compared to PM2.5 collected in the afternoon (12–4pm), which contains a higher proportion of photo-oxidized, secondary PM products”. It is unclear whether the authors have provided conclusive evidence of these diurnal differences in terms of PM chemistry. Please clarify.\n\nSummary and Conclusion. The authors state, “Two in vitro assays of the cellular inflammatory response consistently demonstrated greater oxidative and proinflammatory activity due to primary (morning) PM2.5 exposure. We attribute this effect to the greater transition metal and water-insoluble organic carbon (WIOC) content of primary PM2.5, two classes of PM components that increase toxicity”. Again, how conclusive is the chemical analysis of PM from these two periods for PM2.5 collected over a period of five weeks?\n\nSummary and Conclusion. The authors state, “This research will ultimately help us gain a more complete understanding of the complex nature of particulate matter and how its composition and proinflammatory effects change over time due to photochemical aging in the atmosphere. The Southern California climate of Los Angeles with abundant sunshine, compounded with dense vehicular traffic, generates ubiquitous primary and secondary PM throughout the year. Identifying the health effects of these pollutants is critical as we strive to understand the underlying mechanisms of PM-induced oxidative stress, neuroinflammation and associated morbidity”. This is a laudatory conclusion made by the authors using in vitro cells that may or may not represent in vivo conditions of cell response and/or PM dose delivered to the nervous system. 
The authors need to clearly state the advantages of their study, along with the limitations for interpretation and extrapolation to actual in vivo conditions of exposure.\n\nSummary and Conclusion. The authors state, “Our findings may help in further elucidating the role of PM in the etiology, onset and development of widespread, chronic diseases that plague urban populations, including cancer, cardiac and respiratory distress, and neurodegenerative disorders such as Alzheimer’s disease”. Should the authors acknowledge the limitations of the methods used in this study, this concluding statement is reasonable.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "3999",
"date": "01 Oct 2018",
"name": "Christopher Lovett",
"role": "Author Response",
"response": "Authors’ responses to specific comments of Dr. Kent Pinkerton: Comment 1: It is unclear how the dose of PM delivered to microglia in vitro would compare to the in vivo setting where PM and/or oxidation reaction products would need to be transported either through the olfactory epithelium of the nasal cavity and/or via the blood passing through the lungs. The authors need to provide further clarification as to the credibility of using microglia cells in culture to mimic neurotoxicology. The authors need to clarify these concerns. Authors’ Response: While we acknowledge the limitations of in vitro assays, we believe the reliability of the in vitro assays in characterizing neurotoxicology, as compared to in vivo experiments, has been adequately documented in the several studies cited in the paper (e.g. Morgan, et al., 2011; Davis et al., 2013; Cheng et al., 2016b; Woodward et al., 2017b). Additionally, the specific use of cultured BV-2 cells (microglia) successfully in in vitro assays of neurotoxicity has also been demonstrated, as cited in the paper (e.g. Gresa-Arribas et al., 2012; Eun et al., 2017). Comment 2: A five-week sampling campaign was conducted by the authors during late summer (August and September) of 2016, ensuring maximum UV sunlight exposure to enhance photochemical oxidation reactions seems reasonable. However, how was the stability of PM samples maintained over such a prolonged period of sampling of five weeks? How stable are photochemical oxidation reactions? Authors’ Response: The PM samples were collected daily over the course of 5 weeks using the direct aerosol-into-liquid sampling system (Wang et al., 2013a). At the end of each sampling period, morning and afternoon, each daily aqueous sample was added to a total sample collection bottle kept under refrigeration. Once collected and refrigerated, these cumulative aqueous samples were used in the in vitro experiments discussed in the paper. 
It is possible that changes in PM composition might occur during sampling, but this is the advantage of using the direct aerosol-into-liquid system as opposed to filter-based sampling. PM is collected directly into an aqueous suspension and does not have to undergo an aqueous extraction and re-suspension process, thereby significantly reducing the possibility of any artifact formation. The benefits of this collection method compared to conventional filter samplers are discussed in several prior publications (Zhao et al., 2005; Wang et al., 2013b; Saarikoski et al., 2014).\n\nAdded text (Methods: Particulate sample collection): “At the end of each morning and afternoon daily sampling period, each aqueous slurry sample was added to its corresponding total sample collection bottle that was kept refrigerated. At the end of the 5-week sampling period, these continuously refrigerated, cumulative aqueous slurry samples were then used in the in vitro assays. While it is possible that changes in PM composition might occur during sampling, the advantage of using the direct aerosol-into-liquid system is that PM is collected directly into an aqueous suspension and does not undergo an aqueous extraction and re-suspension process, thereby significantly reducing the possibility of any artifact formation. The benefits of this collection method compared to conventional filter sampling systems have been discussed extensively in the literature (e.g. Zhao et al., 2005; Wang et al., 2013b; Saarikoski et al., 2014).”\n\nAdditional References (added to paper):\n\nZhao, Y., Bein, K. J., Wexler, A. S., Misra, C., Fine, P. M., & Sioutas, C. (2005). Field evaluation of the versatile aerosol concentration enrichment system (VACES) particle concentrator coupled to the rapid single‐particle mass spectrometer (RSMS‐3). Journal of Geophysical Research: Atmospheres, 110(D07S02).\n\nSaarikoski, S., Carbone, S., Cubison, M. J., Hillamo, R., Keronen, P., Sioutas, C., Worsnop, D. R., & Jimenez, J. L. (2014). 
Evaluation of the performance of a particle concentrator for online instrumentation. Atmospheric Measurement Techniques, 7(7), 2121-2135.\n\nComment 3: PM2.5 slurry samples were assayed with immortalized BV-2 microglia. It is unclear in the text what concentration of PM2.5 slurry samples was used for these assays. Please state in the text.\n\nAuthors’ Response: In the Methods section, the concentrations of PM2.5 slurry samples used in each assay are stated, including the nitrite assay (Methods section: Nitrite Assay), which used 1, 5 and 20 µg/mL, and the qPCR assay (Methods section: Quantitative Polymerase Chain Reaction (qPCR)), which used 10 µg/mL.\n\nComment 4: Based on Figure 1, were 1, 5 and 20 μg/mL of PM2.5 used? How do these doses compare to the concentration of PM2.5 actually reaching microglia cells in vivo?\n\nAuthors’ Response: In the Nitrite Assay (Figure 1), concentrations of 1, 5 and 20 µg/mL were used. In the qPCR assay (Figure 2), the PM2.5 concentration used was 10 µg/mL. We do not know the concentration of PM2.5 reaching microglia in the brain following ambient exposure; however, the in vitro assays were used as a model for how brain cells respond to PM2.5 at the given concentrations. The focus of the paper was to investigate the differences between morning (primary-dominated) and afternoon (secondary-dominated) PM2.5, rather than quantifying the actual exposure concentrations, and resulting CNS concentrations, that would be considered harmful. We modeled these interactions between PM2.5 and microglia using concentrations that are below the threshold for cell death, as evaluated by the MTT assay. While there is evidence that particles can directly enter the brain through the olfactory tract (Oberdörster et al., 2004), the concentration of particles interacting directly with brain cells via this route has not been quantified, and thus comparisons could not be made. 
Added text (Discussion section: paragraph 2): “While there may be a concern that the concentrations of PM2.5 treatments used in the in vitro assays do not reflect the exact concentrations of PM2.5 reaching microglia in the brain following ambient exposures, these assays do serve as a useful model for how brain cells in living organisms would respond to PM2.5 at the given concentrations (i.e. 1, 5, 10, and 20 μg/mL). The focus of the paper was to investigate the differences between morning (primary-dominated) and afternoon (secondary-dominated) PM2.5, rather than to quantify actual exposure concentrations, and subsequent CNS concentrations, that would be considered harmful. We modeled these interactions between PM2.5 and microglia using concentrations that are below the threshold for cell death, as evaluated by the MTT assay. While there is evidence that particles can directly enter the brain through the olfactory tract (Oberdörster et al., 2004), and thus perhaps maintain higher concentrations than PM2.5 passing through the periphery, the concentration of particles interacting directly with brain cells via this route has not been quantified, and thus comparisons to these results could not be made.” Additional Reference (added to paper): Oberdörster, G., Sharp, Z., Atudorei, V., Elder, A., Gelein, R., Kreyling, W., & Cox, C. (2004). Translocation of inhaled ultrafine particles to the brain. Inhalation Toxicology, 16(6-7), 437-445. Comment 5: Nitric oxide (NO) was assayed in BV-2 cell media by the Griess reagent at 30, 60 minute and 24 hour timepoints. A NaNO2 stock solution was used to allow the creation of a standardization curve to determine the NO concentration in each treatment well from measured absorbance data. The authors observed a dose-dependent NO response to PM2.5 treatments relative to control at all timepoints (30 min., 60 min., 24 hr.). This assay demonstrated greater effects for morning PM2.5 than for afternoon PM2.5. 
This assay is quite remarkable, meritorious and clearly illustrated. The interpretation of these findings needs to be clearly stated. Authors’ Response: To clarify these findings, we offer the following additional text: Added text (Results: Nitric Oxide (NO)): “The acute effects of PM exposure seen within the first hour of exposure, at 30 and 60 minutes post-treatment, are due to direct NO induction, while the sustained effect still measurable after 24 hours indicates that there has been upregulation of the iNOS enzyme that produces NO. Thus, this overall effect is two-fold, with the increase in NO secretion due to PM2.5 exposure mediated by two distinct mechanisms, acute and delayed.” Comment 6: Summary and Conclusion. The first portion of this section is nicely written to state “urban PM2.5 collected during the morning rush hour (6-9am), when primary, traffic-derived PM emissions are dominant, induces greater oxidative and proinflammatory responses in cells as compared to PM2.5 collected in the afternoon (12-4pm), which contains a higher proportion of photo-oxidized, secondary PM products.” It is unclear whether the authors have provided conclusive evidence of these diurnal differences in terms of PM chemistry. Please clarify. Authors’ Response: It is well-known that over the course of daytime sunlight exposure, photochemical reactions occur, and secondary PM components, largely products of primary emissions oxidation, are formed such that secondary PM has a different chemical composition than primary PM. This is discussed in several places in the text, and relevant studies were cited (e.g. Forstner et al., 1997; Grosjean & Seinfeld, 1989; Seinfeld & Pandis, 2016). That the observed diurnal variations in cellular oxidative stress, as measured by two distinct in vitro assays, are related to higher concentrations of PM components associated with primary PM (i.e. 
transition metals and WIOC) was also discussed, and several studies were cited in support of this conclusion (e.g. Tao et al., 2003; Li et al., 2009; Cho et al., 2005; Zhang et al., 2008; Hu et al., 2008; Shirmohammadi et al., 2015). Comment 7: Summary and Conclusion. The authors state, “Two in vitro assays of the cellular inflammatory response consistently demonstrated greater oxidative and proinflammatory activity due to primary (morning) PM2.5 exposure. We attribute this effect to the greater transition metal and water-insoluble organic carbon (WIOC) content of primary PM2.5, two classes of PM components that increase toxicity.” Again, how conclusive is the chemical analysis of PM from these two periods for PM2.5 collected over a period of five weeks? Authors’ Response: We feel that we have provided ample and solid evidence of the compositional differences between these two sets of samples (am-PM2.5 and pm-PM2.5). We respectfully disagree with the suggestion that the chemical analyses do not clearly establish these differences, specifically with regard to both the organic carbon content and the toxic trace elements and metals components, as can be seen in Figure 3 as well as in the supplemental chemical analyses data files. If the concern is the lengthy time-integrated sample collection period of 5 weeks leading to sample instability or degradation, the aerosol-into-liquid samples are exceptionally stable given the removal of a filter-extraction step in the analysis process, particularly when kept refrigerated, as discussed above in our response to comment 2. Comment 8: Summary and Conclusion. The authors state, “This research will ultimately help us gain a more complete understanding of the complex nature of particulate matter and how its composition and proinflammatory effects change over time due to photochemical aging in the atmosphere. 
The Southern California climate of Los Angeles with abundant sunshine, compounded with dense vehicular traffic, generates ubiquitous primary and secondary PM throughout the year. Identifying the health effects of these pollutants is critical as we strive to understand the underlying mechanisms of PM-induced oxidative stress, neuroinflammation and associated morbidity”. This is a laudatory conclusion made by the authors using in vitro cells that may or may not represent in vivo conditions of cell response and/or PM dose delivered to the nervous system. The authors need to clearly state the advantages of their study, along with the limitations for interpretation and extrapolation to actual in vivo conditions of exposure. Authors’ Response: One methodological advantage of this study, as stated in the text, is the use of the direct aerosol-into-liquid sampler, which allows us to capture more water-insoluble PM species that may not be eluted during water extraction of filters. Another advantage of this study is that we use two separate in vitro assays of proinflammatory biomarkers (the nitrite assay and qPCR). Additionally, while there are limitations in extrapolating in vitro findings to processes occurring in living organisms, the focus of this study was to compare the relative proinflammatory effects of direct cellular exposure to morning and afternoon PM2.5 rather than to precisely quantify these effects in vivo. Additionally, studies of direct interactions between PM2.5 and microglia in vivo have not been conducted for comparison with our (and numerous other) in vitro studies. Comment 9: Summary and Conclusion. The authors state, “Our findings may help in further elucidating the role of PM in the etiology, onset and development of widespread, chronic diseases that plague urban populations, including cancer, cardiac and respiratory distress, and neurodegenerative disorders such as Alzheimer’s disease”. 
Should the authors acknowledge the limitations of the methods used in this study, this concluding statement is reasonable. Authors’ Response: Regarding the limitations of the methods used in this study, the additional text incorporated into the revised manuscript, as detailed in the responses to the comments above, should be sufficient to address these concerns."
}
]
}
] | 2
|
https://f1000research.com/articles/7-596
|
https://f1000research.com/articles/7-1577/v1
|
28 Sep 18
|
{
"type": "Method Article",
"title": "Large-scale protein function prediction using heterogeneous ensembles",
"authors": [
"Linhua Wang",
"Jeffrey Law",
"Shiv D. Kale",
"T. M. Murali",
"Gaurav Pandey"
],
"abstract": "Heterogeneous ensembles are an effective approach in scenarios where the ideal data type and/or individual predictor are unclear for a given problem. These ensembles have shown promise for protein function prediction (PFP), but their ability to improve PFP at a large scale is unclear. The overall goal of this study is to critically assess this ability of a variety of heterogeneous ensemble methods across a multitude of functional terms, proteins and organisms. Our results show that these methods, especially Stacking using Logistic Regression, indeed produce more accurate predictions for a variety of Gene Ontology terms differing in size and specificity. To enable the application of these methods to other related problems, we have publicly shared the HPC-enabled code underlying this work as LargeGOPred (https://github.com/GauravPandeyLab/LargeGOPred).",
"keywords": [
"protein function prediction",
"heterogeneous ensembles",
"machine learning",
"high-performance computing",
"performance evaluation"
],
"content": "Introduction\n\nGiven the large and rapidly growing gap between sequenced genomes and experimentally determined functional annotations of the constituent proteins, the automation of protein function prediction (PFP) using computational tools is critical1,2. However, diverse data sources, data quality issues, like noise and incompleteness, and a lack of consensus on the best predictor(s) for various types of data and functions pose serious challenges for PFP. Specifically, data types used by existing PFP methods have included amino acid sequences, protein structure information, gene expression profiles and protein-protein interaction networks. Similarly, prediction methodologies have ranged from homology-based sequence alignment to machine learning algorithms, network-based methods, and others. Several community-based critical assessments, especially CAFA3,4, have been organized to objectively measure the performance of these diverse PFP methods. A central finding from these assessments was the variable performance of the tested methods/predictors for different functional terms from the Gene Ontology (GO)5,6 and target proteins, demonstrating that there is no ideal predictor of all types of protein function.\n\nA potential approach for improving prediction performance in such a scenario of diverse data types and individual/base predictors is to build heterogeneous ensembles7. These ensembles harness the consensus and diversity among the base predictors, and can help reduce potential overfitting and inaccuracies incurred by them. Unsupervised methods like majority vote and mean aggregation, and supervised approaches like stacking and ensemble selection are the most commonly used methods for building heterogeneous ensembles. Stacking builds such an ensemble by learning a function, also known as a meta-predictor, that optimally aggregates the outputs of the base predictors8. 
Ensemble selection methods iteratively add one or more base predictors to the current ensemble either greedily or to improve the overall diversity and performance of the ensemble9–11. These approaches have been successfully applied to a variety of prediction problems12–15.\n\nIn previous work7, we tested the efficacy of heterogeneous ensembles for annotating approximately 4,000 Saccharomyces cerevisiae proteins with GO terms. For this, we evaluated stacking using logistic regression as the meta-predictor and Caruana et al.’s ensemble selection (CES) algorithm9,10, both implemented in our open-source package DataSink. The implementation uses a nested cross-validation setup7 to train the base predictors and the ensembles independently with the aim of reducing overfitting16 and improving prediction performance. These experiments showed that both CES and stacking performed significantly better than stochastic gradient boosting17, the best-performing base predictor for all the GO terms considered. This improvement was observed both in terms of the AUC score and the Fmax measure, which has been established to be more relevant for PFP evaluation3,4.\n\nA major limitation of this previous study was the relatively high computational cost of constructing heterogeneous ensembles, despite their high-performance computing (HPC)-enabled implementations in DataSink. Due to this cost, we were able to test the ensembles’ performance on only three GO terms for proteins of only one organism (S. cerevisiae). Owing to the same limitation, only logistic regression was tested as the meta-predictor for stacking. 
Thus, despite the initial encouraging results, it remains unclear if heterogeneous ensembles provide the same improvement over individual base predictors for a substantial part of GO as well as for a large number of proteins from multiple organisms.\n\nThe overall goal of this study is to critically assess this ability of heterogeneous ensembles to improve PFP at a large scale across a multitude of functional terms, proteins and organisms. For this, we adopt an HPC-enabled strategy to evaluate heterogeneous ensembles, built using CES and stacking with eight meta-prediction algorithms, for large-scale PFP. This evaluation is conducted over 277 GO terms and more than 60,000 proteins from 19 pathogenic bacterial species. Specifically, we analyze the following aspects of heterogeneous ensembles:\n\n1. Prediction performance compared to that of the best-performing individual predictor for each GO term.\n\n2. How this performance varies for different GO terms categorized by:\n\n(a) Number of genes annotated to each term (size).\n\n(b) Different depths in the GO hierarchy (levels of specificity).\n\nWe expect the results of this study to shed light on the efficacy of heterogeneous ensembles for large-scale protein function prediction. To enable the application of these ensembles to other related problems, we have publicly shared the HPC-enabled code underlying this work as LargeGOPred.\n\n\nMethods\n\nWe extracted the amino acid sequences of 63,449 proteins from 19 clinically relevant bacterial pathogens, which include a subset of organisms from the Health and Human Services (HHS) list of select agents and those with current high clinical relevance18,19. 
The annotations of these proteins to GO terms used in this study were either inferred by a curator (evidence codes: ISS, ISO, ISA, ISM, IGC, IBA, IBD, IKR, IRD, RCA, TAS, NAS and IC) or from experiments (evidence codes: EXP, IDA, IPI, IMP, IGI and IEP), but not from electronic annotations (IEA) in the UniProt database20. We selected 277 molecular function (MF) and biological process (BP) GO terms with more than 200 annotated proteins across all the 19 bacteria. The constantly changing contents of the GO ontology and annotations, as well as our incomplete knowledge of the latter make it possible for sequences not annotated to a GO term to be annotated in the future. Thus, to prepare more well-defined datasets, for each GO term, we defined proteins annotated to it as positive samples and any proteins that are neither annotated to the GO term nor its ancestors or descendants as negative samples21. The resultant distributions of GO terms with regard to the number of proteins positively annotated to them for each organism and across all organisms are shown in Table 1.\n\nThe ‘#Proteins’ column shows the number of proteins in the corresponding bacterial pathogen listed in the ‘Organism’ column. The disease(s) each of these pathogens has been implicated in are listed in the ‘Disease(s)’ column. The ‘Distribution of GO terms’ column with 3 sub-columns shows the number of proteins annotated with GO terms with that range of #annotations, with the corresponding number of GO terms shown in parenthesis. The final row of the table shows the total number of proteins and GO terms considered in this study. Ranges of distributions of GO terms for all species are shown in the parenthesis of the three ‘#annotations’ sub-columns. 
Since each GO term is considered independently, each protein may be counted as annotated to multiple GO terms.\n\nWe chose normalized k-mer frequencies, extracted using the khmer package (2.1.1)22, as our feature set to represent the information contained in the amino acid sequences and construct a feature matrix that can serve as input for LargeGOPred. K-mers have been used for similar purposes in several PFP studies1, as well as related problems like the prediction of protein secondary structure23 and RNA-protein interactions24. Since the size of the feature set (all possible k-mers) grows rapidly with increasing value of k, setting k to a high value may be impractical for large-scale PFP tasks like ours. Additionally, 1- and 2-mers may not provide enough context information about the sequence. Thus, we set k = 3 since this value strikes a balance between the information captured by the k-mers and computational scalability. For each amino acid sequence, we extracted frequencies for all possible 8,000 3-mers at each position of the sequence. We then normalized these frequencies by the length of the sequence to reduce the potential bias due to the variation of sequence lengths among the proteins.\n\nAll the processed data are available from https://zenodo.org/record/1434450#.W6lU2hNKhBx (doi: 10.5281/zenodo.1434450)25.\n\nThe overall approach adopted for this study is visualized and described in detail in Figure 1. Two key components of the approach, specifically the heterogeneous ensemble methods used and nested cross-validation, are described in the following subsections, as well as in our previous work7. The prediction performance of all the predictors tested in this study, specifically the base classifiers and ensembles, was evaluated in terms of the Fmax measure, which is the maximum value of F-measure26 across all binarization thresholds, and has been recommended as a PFP evaluation measure by CAFA3,4. 
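As an illustration of this evaluation measure, Fmax for a single GO term can be computed as below (a minimal sketch with names of our own choosing, not the evaluation code used in the study):

```python
import numpy as np

def fmax(y_true, y_scores):
    """Maximum F-measure over all binarization thresholds of the scores."""
    y_true = np.asarray(y_true, dtype=bool)
    y_scores = np.asarray(y_scores, dtype=float)
    best = 0.0
    # Each distinct score is a candidate threshold for binarizing predictions.
    for t in np.unique(y_scores):
        y_pred = y_scores >= t
        tp = np.sum(y_pred & y_true)
        if tp == 0:
            continue  # precision and recall are both 0 at this threshold
        precision = tp / np.sum(y_pred)
        recall = tp / np.sum(y_true)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```

For a ranking that places the only positive above every negative, e.g. `fmax([1, 0], [0.9, 0.1])`, this returns 1.0 at the top threshold.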
We also evaluated the statistical significance of the difference between the performance of the various predictors (described below)27. Finally, since we approach GO term prediction as a binary classification problem, the terms “predictor” and “classifier”, and their variants will be used interchangeably as appropriate in the rest of the paper.\n\nWe first extracted normalized 3-mer frequencies from the amino acid sequences as features. Training data for 12 types of base classifiers (upper half of Table 2) were randomly under-sampled into 10 bags containing equal numbers of positive and negative samples to address class imbalance and to introduce diversity among base classifiers, even among those of the same type. The predictions from these bags were averaged for each base classifier and collected to train the heterogeneous ensembles using three types of methods, namely mean aggregation, 8 stacking meta-classifiers (bottom half of Table 2), and Caruana et al.’s ensemble selection (CES). Separate test data were used to evaluate the heterogeneous ensembles. The entire process was conducted within a nested cross-validation setup (described below) executed for each target GO term separately.\n\nThe base and meta-classifiers were adopted from Weka28 and scikit-learn30 respectively.\n\nWe used 12 diverse base predictors from the Weka machine learning suite (3.7.10)28 (upper half of Table 2) and built 3 types of unsupervised and supervised heterogeneous ensembles on top of them. The unsupervised mean method simply takes the average of the predictions from base classifiers as the final prediction. For supervised heterogeneous ensembles, we tested various stacking methods and one of the most widely used ensemble selection methods, namely CES.\n\nStacking. Stacking builds a heterogeneous ensemble by learning a meta-classifier that optimally aggregates the outputs of the base predictors. 
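This idea can be sketched with scikit-learn's stock stacking implementation (a minimal illustration of the general technique on synthetic data, not the DataSink/LargeGOPred pipeline; the base classifiers and parameters here are our own choices):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Toy binary problem standing in for one GO term's positive/negative proteins.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Heterogeneous base classifiers, aggregated by a logistic-regression
# meta-classifier (the LR.S configuration in the paper's terminology).
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),
    cv=5,  # meta-classifier is fit on out-of-fold base predictions
)
stack.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```

The `cv` argument plays a role loosely analogous to, though much simpler than, the nested cross-validation setup described in the Methods: the meta-classifier sees only out-of-fold predictions of the base classifiers, which limits overfitting at the meta-learning level.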
Unlike our previous study, where only stacking using logistic regression as the meta-classifier was tested, we used 8 different meta-classifiers in this study (bottom half of Table 2), and statistically compared their performance over all the target prediction problems.\n\nEnsemble selection and CES. Ensemble selection is the process of selecting a subset of the base classifiers that are mutually complementary, such that the resultant ensemble is as predictive as possible.\n\nIn this study, we tested Caruana et al.’s ensemble selection (CES) algorithm for large-scale PFP9,10. CES is an iterative algorithm that starts with an empty ensemble, and in each iteration, adds the base predictor that best improves the resultant ensemble’s performance, partly due to the added predictor’s complementarity to the current ensemble. The process continues until the ensemble’s performance no longer improves, or even starts to decrease. In this work, we tested the version of CES in which the base predictor to be added to the ensemble was sampled with replacement in each iteration9.\n\nCross-validation (CV) is a frequently used methodology for training and testing classifiers and other predictors29. However, in the case of learning supervised ensembles like ours that involve two rounds of training (first the base classifiers and then the ensembles), using standard cross-validation may lead to overfitting of the ensemble. Thus, as explained in our previous work7, we devised a nested cross-validation procedure to be used for training and testing supervised ensembles. In this procedure, the entire dataset was split into outer training and test CV splits and each outer training split was further divided into inner CV folds. Base classifiers were trained on the inner training split and used to predict on the corresponding inner test split. 
Predictions made by the base classifiers were collected across all inner testing folds and used as the base data to train the heterogeneous ensembles. The outer test splits were then used to evaluate the performance of the trained ensembles. The nested cross-validation strategy ensures that the base classifiers and ensembles are trained on separate subsets of the data set, thus reducing the chances of bias and overfitting.\n\nWe addressed the potentially high computational costs by parallelizing all the independent units of the nested CV process, namely the training and testing of base and ensemble predictors over all the inner and outer CV splits. These units were then executed on separate processors in a large HPC cluster, with the outputs of inner CV folds flowing into the outer ones as described in our earlier work7. We have made this HPC-enabled implementation of the heterogeneous ensemble PFP process publicly available as LargeGOPred.\n\nIn this study, we compared multiple heterogeneous ensembles and base classifiers on their ability to predict annotations to a large number of GO terms. In such situations, it is critical to assess the statistical significance of these numerous comparisons to derive reliable conclusions. For this, we used Friedman’s and Nemenyi’s tests and visualized their results in easily interpretable critical difference (CD) diagrams27. Friedman’s test ranks all the tested classifiers over all datasets (here, GO terms) and tests if the mean ranks of all classifiers are statistically equivalent, while Nemenyi’s test performs the equivalent of multiple hypothesis correction for these comparisons. We used the scmamp (0.3.2)31 R package to perform these tests and visualize their results as CD diagrams.\n\n\nResults\n\nWe first evaluated if and to what extent heterogeneous ensembles enable the prediction of protein function as compared to individual predictors. 
Figure 2 shows the results of this evaluation in terms of the difference of the performance of a variety of ensembles from that of the best base classifier for each GO term, with the terms themselves categorized by their sizes. Although there is substantial variability in the values of ∆Fmax across ensemble methods and GO term categories, some trends can still be observed. First, the values of ∆Fmax across ensembles increase as the sizes of the GO terms considered increase. This is illustrated by the fact that zero, one (Stacking with Logistic Regression) and four (CES and Stacking with Logistic Regression, Random Forest and Naive Bayes) ensembles produce ∆Fmax>0 for every GO term tested in the small, medium and large categories (from left (a) to right (c) in Figure 2). This trend is expected, since the availability of more positively annotated genes in the larger GO terms enhances the ability of the ensembles, especially the supervised ones, to improve PFP performance. For the same reason, namely the availability of more training data, the variability of PFP performance for the large terms, represented by the widths of the boxes and whiskers, is smaller, illustrating increased robustness of the ensembles.\n\nThe Y-axis shows all heterogeneous ensembles tested, specifically mean (aggregation), Caruana et al.’s ensemble selection (CES) and 8 stacking methods using different meta-classifiers named here. The X-axis denotes the difference between the Fmax of each heterogeneous ensemble and the best base classifier for each GO term (∆Fmax), which are categorized into (a) 152 small, (b) 71 medium and (c) 54 large GO terms with 200-500, 500-1000 and over 1000 annotated sequences in our dataset (Table 1). The broken vertical red line in each subplot represents ∆Fmax=0.\n\nTo analyze these results in further detail and derive reliable conclusions from them, we used Friedman’s and Nemenyi’s tests to statistically assess the ∆Fmax values shown in Figure 2. 
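The Friedman step of this analysis can be sketched as follows (the study itself used the scmamp R package, which also provides Nemenyi's post-hoc test and CD diagrams; this Python sketch with synthetic ∆Fmax scores only illustrates the Friedman test):

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# Synthetic ∆Fmax scores: rows = GO terms, columns = three ensemble methods.
# The first method is shifted upward so there is a real difference to detect.
scores = rng.normal(0.0, 0.01, size=(30, 3))
scores[:, 0] += 0.05  # e.g. a consistently better method

# Friedman's test ranks the methods within each GO term and asks whether
# the mean ranks across all terms are statistically equivalent.
stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.2e}")
```

A small p-value here only says that some method differs in mean rank; a post-hoc test such as Nemenyi's is still needed to decide which pairs of methods differ.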
Figure 3 shows the results of these tests visualized as Critical Difference (CD) diagrams for the three categories of GO terms shown in Figure 2A–C, as well as all of them taken together (Figure 2D). These results show that several heterogeneous ensemble methods, such as LR.S, NB.S, Mean, RF.S, CES and SGD.S, performed better than the respective best base classifier in terms of their average rank27. In contrast, KNN.S and DT.S performed worse than the best base classifier for each category of GO terms considered.\n\nIn these diagrams, PFP methods, represented by vertical+horizontal lines, are displayed from left to right in terms of the average rank obtained by their resultant models for each GO term included. The groups of methods producing statistically equivalent performance are connected by horizontal lines. (A)–(C) show the CD diagrams for the three categories of GO terms shown in Figure 2, while (D) shows the one for all the 277 GO terms considered in this study. The scmamp R package31 was used to perform the Friedman and Nemenyi’s tests and plot the CD diagrams. Meta-classifiers used within stacking are denoted by their commonly used acronyms, e.g. LR for Logistic Regression, appended with “.S”.\n\nA consistent observation from Figure 3 is that Stacking using Logistic Regression (LR.S) performed the best among all the tested predictors (leftmost entry in the CD diagrams) regardless of the GO term category considered. It performed statistically equivalently with NB.S and CES for the small (Figure 3A) and large (Figure 3C) GO terms respectively, statistically confirming the observations made from Figure 2. In particular, LR.S exclusively performed the best among all the predictors over all the GO terms examined, consistent with its good performance over a limited number of GO terms in our previous work7. 
Thus, we further analyzed the performance of this predictor across the hierarchical structure of the Gene Ontology.\n\nGO terms are not a flat set of labels, but are rather organized in hierarchical ontologies structured as directed acyclic graphs (DAGs)5,6. Terms vary in their depth, or level, with deeper terms representing more specific functions as compared to those at shallower levels. Using the definition of the level of a GO term as the length of the shortest path to it from the root of the hierarchy, implemented in the GOATOOLS python package (0.8.4)32, we observed that the levels of the terms in our dataset varied between 1 and 8 (Figure 4(A)). In terms of the number of genes annotated, as expected, most of the annotations are to the shallower GO terms and only a small number to the deeper ones (Figure 4(B)).\n\n(A) and (B) show the distributions of the number of GO terms and the number of genes annotated to these terms at different levels respectively. (C) and (D) show the distributions of LR.S’s Fmax scores and their differences from the corresponding scores of the best classifier (∆Fmax) for these GO terms at the various levels.\n\nWe analyzed the ability of LR.S to predict annotations to these terms, measured in terms of Fmax, at different levels (Figure 4(C)). The performance is reasonably high at level 1, but decreases gradually until level 6 due to fewer annotations available for training the base classifiers and ensembles (Figure 4(B)). The performance improves slightly at levels 7 and 8, likely due to the increased specificity of the corresponding terms and thus better signal in the corresponding training data.\n\nFinally, we analyzed how LR.S’s performance compared with that of the best classifier for the tested GO terms at different levels of the hierarchy. For this, we calculated and plotted in Figure 4(D) the same ∆Fmax measure shown in Figure 2, this time categorized by levels. 
The results in Figure 4(D) show that ∆Fmax increases overall for GO terms at increasingly deeper levels in the hierarchy. The increases are statistically significant (Wilcoxon signed-rank test p-value<0.05) at levels 1–7, although not significant (p-value=0.17) at level 8, which contains only two terms (Figure 4(A)). These results indicate the benefit that heterogeneous ensembles, specifically LR.S, can provide for deeper GO terms with fewer annotations, where individual predictors may not be effective.\n\n\nDiscussion\n\nOwing to the diversity of available data types and computational methodologies, a variety of methods have been proposed for protein function prediction (PFP)1,2. CAFA3,4 and other large-scale assessment efforts demonstrated that there is no ideal method for predicting different types of functions. In this paper, we have demonstrated a potential approach to address this problem, namely assimilating individual methods/predictors into heterogeneous ensembles that may be more robust, generalizable and predictive across functions. Although we had provided preliminary results supporting this approach in our previous work7, those results were limited to predicting annotations to only three GO terms. In this paper, we report the first comprehensive and large-scale assessment of protein function prediction using heterogeneous ensembles. Specifically, using a data set of over 60,000 bacterial proteins annotated to almost 300 GO terms, we assessed how mean aggregation, CES and stacking using multiple meta-classifiers performed for PFP.\n\nSeveral of the tested heterogeneous ensembles performed better than the best base/individual predictor for many of the GO terms examined. In particular, the performance improvements obtained by heterogeneous ensembles generally increased with more annotations available for a given GO term, i.e. 
its size, which can be expected due to the larger amount of positive data available for training the base predictors and ensembles.\n\nA rigorous statistical comparison of all the heterogeneous ensembles and best base predictors tested over different categories of GO terms based on their sizes reaffirmed the effective performance of ensembles for PFP. In particular, Stacking using Logistic Regression (LR.S) was consistently the best-performing ensemble method across all the GO term categories, a finding consistent with our earlier work7. The effectiveness of LR.S can be attributed to the simplicity of the logistic regression function, which can help control overfitting at the meta-learning level during stacking. This effectiveness was also reflected in our observation that LR.S is increasingly accurate for GO terms deeper in the hierarchy, for which the small number of annotations available may adversely affect individual predictors. Overall, our study and results demonstrate the potential of heterogeneous ensembles to advance protein function prediction on top of the progress in individual predictors already being reported in CAFA3,4 and other exercises.\n\nA key feature of our work was the effective utilization of high-performance computing (HPC) to enable efficient large-scale PFP. Specifically, using a large number of processors in a sizeable HPC cluster, we successfully built and evaluated heterogeneous ensembles for over 60,000 bacterial proteins annotated to almost 300 GO terms in under 48 hours. 
While this level of efficiency is already appreciable, it can be improved further by utilizing more parallelized formulations of the process, such as using parallel implementations of base classification methods33 instead of the serial versions used in this work.\n\nAlthough the results of our study are encouraging, they were derived using data from only 19 pathogenic species, owing to our group’s general interest in using PFP to better understand and predict annotated and unannotated pathogenicity in clinically relevant bacteria. Including a larger and more diverse set of species, both prokaryotic and eukaryotic, in this evaluation can help assess how well our methods generalize to other species. The same can be said for including other types of data as well, such as the gene expression profiles used in our previous work7.\n\nWe also only used normalized k-mer frequencies derived from amino acid sequences to represent proteins. This could be extended to test other representations such as short linear motifs (SLiMs)34, hidden Markov models (HMMs)35 and learned protein embeddings36. Moreover, regardless of the representation, another potential issue is that highly conserved and thus similar sequences across the 19 species tested in this study might be split across the training and test sets, which may result in an overestimation of prediction performance. Though UniProt controls for within-species redundancy, it does not remove redundancy between species, an issue also true for our dataset. To address this issue, non-redundant versions of UniProt, such as UniRef100 or UniRef9020, could be used to design more representative training and test sets.
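As an aside on the representation discussed above, normalized k-mer frequencies are straightforward to compute from an amino acid sequence; k=3 and the toy sequence below are illustrative choices, not the study's exact feature pipeline:

```python
# Sketch: normalized k-mer frequency vector over all 20^k possible k-mers.
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def kmer_features(seq, k=3):
    """Return k-mer counts normalized by the total number of k-mers in seq."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    vocab = ("".join(p) for p in product(AMINO_ACIDS, repeat=k))
    return [counts[kmer] / total for kmer in vocab]

vec = kmer_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(len(vec), round(sum(vec), 6))  # 8000 1.0
```

For k=3 this yields an 8,000-dimensional vector whose entries sum to 1 when the sequence contains only standard residues.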
However, since the same prediction and evaluation process is used throughout our study, such redundancy should not adversely affect the fairness of the comparison between the performance of base predictors and heterogeneous ensembles.\n\nFinally, in this study, we considered GO terms as independent units of protein function, but they are actually related because of their organization in the hierarchical structure of GO. Information from ancestors and closely related siblings in the hierarchy may also be useful for protein function prediction, including through heterogeneous ensembles. Previous work has utilized this information for advancing individual and ensemble PFP algorithms37–39, and similar ideas can be used to improve heterogeneous ensembles as well.\n\n\nData availability\n\nThe data underlying this study are available from Zenodo. Dataset 1: Data for LargeGOPred25. http://doi.org/10.5281/zenodo.1434450\n\nThis dataset is available under a Creative Commons Attribution 4.0 license.\n\n\nSoftware availability\n\nSource code underlying this work is available from GitHub: https://github.com/GauravPandeyLab/LargeGOPred\n\nArchived source code at time of publication40: http://doi.org/10.5281/zenodo.1434321\n\nLicense: GNU General Public License, version 2 (GPL-2.0).",
"appendix": "Author contributions\n\n\n\nLW and GP conceived the study. LW carried out all the computational analyses and wrote the first draft of the manuscript. JL, SDK and TMM prepared the initial data used in the study and assisted with the evaluation of the results. GP supervised the work. All authors read, edited and approved the manuscript.\n\n\nGrant information\n\nThis work was supported in part by National Institutes of Health [R01GM114434] and by an IBM faculty award to GP. It was also partially supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the Army Research Office (ARO) under Cooperative Agreement Number [W911NF-17-2-0105]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NIH, ODNI, IARPA, ARO, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThis work was enabled by the computational resources and staff expertise provided by Scientific Computing at the Icahn School of Medicine at Mount Sinai. We would also like to thank members of the FunGCAT/IGACAT team, as well as colleagues at Mount Sinai, for discussions, suggestions and criticisms of this study.\n\n\nReferences\n\nPandey G, Kumar V, Steinbach M: Computational Approaches for Protein Function Prediction: A Survey. Technical Report 06-028, University of Minnesota, 2006. Reference Source\n\nSharan R, Ulitsky I, Shamir R: Network-based prediction of protein function. Mol Syst Biol. 2007; 3(1): 88. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRadivojac P, Clark WT, Oron TR, et al.: A large-scale evaluation of computational protein function prediction. Nat Methods. 2013; 10(3): 221–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJiang Y, Oron TR, Clark WT, et al.: An expanded evaluation of protein function prediction methods shows an improvement in accuracy. Genome Biol. 2016; 17(1): 184. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAshburner M, Ball CA, Blake JA, et al.: Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet. 2000; 25(1): 25–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe Gene Ontology Consortium: Expansion of the Gene Ontology knowledgebase and resources. Nucleic Acids Res. 2017; 45(D1): D331–D338. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhalen S, Pandey OP, Pandey G: Predicting protein function and other biomedical characteristics with heterogeneous ensembles. Methods. 2016; 93: 92–102. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWolpert DH: Stacked Generalization. Neural Netw. 1992; 5(2): 241–259. Publisher Full Text\n\nCaruana R, Niculescu-Mizil A, Crew G, et al.: Ensemble selection from libraries of models. In Proceedings of the Twenty-first International Conference on Machine Learning. 2004; 18. Publisher Full Text\n\nCaruana R, Munson A, Niculescu-Mizil A: Getting the Most Out of Ensemble Selection. In Proceedings of the Sixth International Conference on Data Mining. 2006; 828–833. Publisher Full Text\n\nStanescu A, Pandey G: Learning Parsimonious Ensembles For Unbalanced Computational Genomics Problems. In Pac Symp Biocomput. 2017; 22: 288–299. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltmann A, Rosen-Zvi M, Prosperi M, et al.: Comparison of classifier fusion methods for predicting response to anti HIV-1 therapy. PLoS One. 2008; 3(10): e3470. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTuarob S, Tucker CS, Salathe M, et al.: An ensemble heterogeneous classification methodology for discovering health-related knowledge in social media messages. J Biomed Inform. 2014; 49: 255–268. PubMed Abstract | Publisher Full Text\n\nWang H, Zhao T: Identifying named entities in biomedical text based on stacked generalization. In Proceedings of the 7th World Congress on Intelligent Control and Automation. 2008; 160–164. Publisher Full Text\n\nNiculescu-Mizil A, Perlich C, Swirszcz G, et al.: Winning the KDD Cup Orange Challenge with Ensemble Selection. J Mach Learn Res. 2009; 7: 23–34. Reference Source\n\nVarma S, Simon R: Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics. 2006; 7(1): 91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFriedman JH: Stochastic gradient boosting. Comput Stat Data Anal. 2002; 38(4): 367–378. Publisher Full Text\n\nCenters for Disease Control and Prevention (CDC), Department of Health and Human Services (HHS): Possession, Use, and Transfer of Select Agents and Toxins; Biennial Review of the List of Select Agents and Toxins and Enhanced Biosafety Requirements. Final rule. Fed Regist. 2017; 82(12): 6278–94. PubMed Abstract\n\nSantajit S, Indrawattana N: Mechanisms of Antimicrobial Resistance in ESKAPE Pathogens. BioMed Res Int. 2016; 2016: 2475067. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUniProt Consortium T: UniProt: the universal protein knowledgebase. Nucleic Acids Res. 2018; 46(5): 2699. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMostafavi S, Ray D, Warde-Farley D, et al.: GeneMANIA: a real-time multiple association network integration algorithm for predicting gene function. Genome Biol. 2008; 9 Suppl 1: S4. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrusoe MR, Alameldin HF, Awad S, et al.: The khmer software package: enabling efficient nucleotide sequence analysis [version 1; referees: 2 approved, 1 approved with reservations]. F1000Res. 2015; 4: 900. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadera M, Calmus R, Thiltgen G, et al.: Improving protein secondary structure prediction using a simple k-mer model. Bioinformatics. 2010; 26(5): 596–602. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuppirala UK, Honavar VG, Dobbs D: Predicting RNA-protein interactions using only sequence information. BMC Bioinformatics. 2011; 12(1): 489. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLinhua W: Data for LargeGOPred [Data set]. Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1434450\n\nLever J, Krzywinski M, Altman N: Points of significance: classification evaluation. Nat Methods. 2016; 13: 603–604. Publisher Full Text\n\nDemsar J: Statistical Comparisons of Classifiers over Multiple Data Sets. J Mach Learn Res. 2006; 7: 1–30. Reference Source\n\nHall M, Frank E, Holmes G, et al.: The WEKA Data Mining Software: An Update. SIGKDD Explorations Newsletter. 2009; 11(1): 10–18. Publisher Full Text\n\nArlot S, Celisse A: A survey of cross-validation procedures for model selection. Stat Surv. 2010; 4: 40–79. Publisher Full Text\n\nPedregosa F, Varoquaux G, Gramfort A, et al.: Scikit-learn: Machine learning in Python. J Mach Learn Res. 2011; 12: 2825–2830. Reference Source\n\nCalvo B, Santafé G: scmamp: Statistical comparison of multiple algorithms in multiple problems. R J. 2016; 8/1. Reference Source\n\nKlopfenstein DV, Zhang L, Pedersen BS, et al.: GOATOOLS: A Python library for Gene Ontology analyses. Sci Rep. 2018; 8(1): 10872. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBekkerman R, Bilenko M, Langford J: Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011. 
Publisher Full Text\n\nHaslam NJ, Shields DC: Profile-based short linear protein motif discovery. BMC Bioinformatics. 2012; 13(1): 104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYoon BJ: Hidden Markov Models and their Applications in Biological Sequence Analysis. Curr Genomics. 2009; 10(6): 402–415. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYang KK, Wu Z, Bedbrook CN, et al.: Learned protein embeddings for machine learning. Bioinformatics. 2018; 34(15): 2642–2648. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPandey G, Myers CL, Kumar V: Incorporating functional inter-relationships into protein function prediction algorithms. BMC Bioinformatics. 2009; 10(1): 142. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu G, Luo W, Fu G, et al.: Interspecies gene function prediction using semantic similarity. BMC Syst Biol. 2016; 10(Suppl 4): 121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang L, Shah SK, Kakadiaris IA: Hierarchical Multi-label Classification using Fully Associative Ensemble Learning. Pattern Recognit. 2017; 70: 89–103. Publisher Full Text\n\nlinhuawang: linhuawang/LargeGOPred: first release (Version 0.0.0). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1434321"
}
|
[
{
"id": "38879",
"date": "24 Oct 2018",
"name": "Guoxian Yu",
"expertise": [
"Reviewer Expertise Gene function prediction",
"Bioinformatics",
"Data mining"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper investigates the potential of heterogeneous ensembles for protein function prediction by quantitatively comparing several classical base classifiers and ensembles on them. This investigative study is interesting, innovative and informative for future study on protein function prediction. This manuscript is clearly presented, well designed and organized. This investigation can be further improved in the following aspects:\nThe used data are only Amino Acid sequences, will the results and conclusions be changed when other types of data are used and integrated? The heterogeneous ensembles are intended for heterogeneous data types. The considered GO terms (annotated to 200-300 proteins) are quite small, compared with the large GO terms space, more specific GO terms (annotated to <200 and >=10 proteins) should be tested. PFP is an imbalanced function prediction problem. Smin is another more stringent evaluation metric in CAFA, and it refers to GO hierarchy when measuring the performance. This metric should be additionally used to quantify the performance of PFP. There are some classifier ensemble based PFP solutions omitted. They should be cited and acknowledged.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? 
Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "38881",
"date": "05 Nov 2018",
"name": "Predrag Radivojac",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study evaluates protein function prediction using heterogeneous ensembles. The authors collected a set of 19 organisms with functional annotations and used a complex cross-validation setup to explore the value of obtaining improved classification performance using model averaging, stacking, and previously proposed techniques by Caruana et al. They considered 277 binary classification problems, each with its own data set of positive and putatively negative genes. The base classifiers were built upon a simple 3-mer feature representation.\nOverall, this work is well presented and is clear in its exposition and contributions: there is value in developing heterogeneous ensembles though the computational cost is significant (here, an HPC solution was necessary to complete the study). Simple stacking models with logistic regression seem to be performing the best. This comes as a small surprise because one would expect nonlinear models to have an edge. On the other hand the base models were already nonlinear which might contribute to this effect.\nSoftware for this work is available which is a plus.\nSpecific comments:\n(the basis for answering one of the questions with \"partly\") Page 3, \"Data used in the study\"\nThe authors say that no electronic annotations have been used, but the majority of the evidence codes provided is in fact electronic annotation. 
See\nhttp://www.geneontology.org/page/guide-go-evidence-codes\nSome of the results of this work might be less realistic if the models were trained on predicted annotations. On the other hand, given the state of annotation of bacterial genomes, it is not clear whether there was an alternative. Nonetheless, this requires clarification, discussion and changes in this paragraph or perhaps elsewhere too.\n\n2. The authors refer to their previous work on the inner and outer cross-validation folds. Although I believe I understood the process, it would be useful to mention whether at any point a base classifier was trained on a particular protein and then the stacked model included that same protein in its training.\n\n3. Figure 1, lower part, ended up not being useful for me. Once we train an ensemble of base classifiers in step 3, I was confused by step 4. This seems to be some intermediate averaging that comes before stacking. It would be good to explicitly point this out to the reader, as it confused me at one point.\n\n4. Not a mandatory request, but it would be useful to perform a leave-one-species-out type of accuracy estimation. This might combat the problems related to sequence similarity that are discussed near the end of the paper. It would also provide evidence on what to expect from computational models when a new species is sequenced.\n\n5. The manuscript would greatly benefit from proofreading and cleaning up some sentence structure and language issues.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Partly\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1577
|
https://f1000research.com/articles/6-594/v1
|
28 Apr 17
|
{
"type": "Research Note",
"title": "Evidence for the oxidant-mediated amino acid conversion, a naturally occurring protein engineering process, in human cells",
"authors": [
"Yuichiro J. Suzuki",
"Jian-Jiang Hao",
"Jian-Jiang Hao"
],
"abstract": "Reactive oxygen species (ROS) play an important role in the development of various pathological conditions as well as aging. ROS oxidize DNA, proteins, lipids, and small molecules. Carbonylation is one mode of protein oxidation that occurs in response to the iron-catalyzed, hydrogen peroxide-dependent oxidation of amino acid side chains. Although carbonylated proteins are generally believed to be eliminated through proteasome-dependent degradation, we previously discovered the protein de-carbonylation mechanism, in which the formed carbonyl groups are chemically eliminated without proteins being degraded. Major amino acid residues that are susceptible to carbonylation include proline and arginine, both of which are oxidized to become glutamyl semialdehyde, which contains a carbonyl group. The further oxidation of glutamyl semialdehyde produces glutamic acid. Thus, we hypothesize that through the ROS-mediated formation of glutamyl semialdehyde, the proline, arginine, and glutamic acid residues within the protein structure are interchangeable. In support of this hypothesis, mass spectrometry demonstrated that proline 45 (a well-conserved residue within the catalytic sequence) of the peroxiredoxin 6 molecule can be converted into glutamic acid in cultured human cells, establishing a revolutionizing concept that biological oxidation elicits the naturally occurring protein engineering process.",
"keywords": [
"Amino acid",
"Glutamyl semialdehyde",
"Oxidative stress",
"Protein carbonylation",
"Protein engineering",
"Protein oxidation",
"Reactive oxygen species"
],
"content": "Introduction\n\nReactive oxygen species (ROS) are produced through the electron reduction of molecular oxygen and include superoxide anion radicals, hydrogen peroxide (H2O2), and hydroxyl radicals (Freeman & Crapo, 1982; Halliwell & Gutteridge, 2007). ROS have been implicated in the pathogenesis of various diseases (Freeman & Crapo, 1982; Halliwell & Gutteridge, 2007), as well as in the aging process (Harman, 1956). One electron reduction of molecular oxygen produces superoxide, which in turn reacts with each other to produce H2O2 and reduces cellular iron ions. Reduced iron donates an electron to H2O2 and produces highly reactive hydroxyl radicals. Hydroxyl radicals in turn react with virtually all biological molecules, including DNA, proteins, lipids and small molecules, damaging the biological system (Freeman & Crapo, 1982; Halliwell & Gutteridge, 2007).\n\nOne important event that occurs in response to the metal (iron)-catalyzed oxidation process is the formation of carbonyls in the protein structure. Protein carbonylation has been shown to be increased in various diseases and in aging (Berlett & Stadtman, 1997; Levine & Stadtman, 2001; Levine, 2002; Stadtman et al., 1988). Protein carbonylation occurs in response to the iron-catalyzed, H2O2-dependent oxidation of amino acid side chains (Stadtman, 1990; Suzuki et al., 2010). Protein carbonylation inactivates protein functions and marks damaged proteins for proteasome-dependent degradation (Grune et al., 1997; Levine, 1989). While carbonylated proteins are believed not to undergo electron reduction, we previously discovered the protein de-carbonylation mechanism, in which carbonyl groups can be eliminated without proteins being degraded (Wong et al., 2008). Major amino acid residues that are susceptible to iron-catalyzed oxidation include proline and arginine, both of which are oxidized to become glutamyl semialdehyde, which contains a carbonyl group (Amici et al., 1989). 
Glutamyl semialdehyde is further oxidized into glutamic acid (Figure 1).\n\nFigure 1. Glutamyl semialdehyde is further oxidized into glutamic acid.\n\nWe previously demonstrated the role of protein carbonylation in ligand/receptor-mediated cell signaling (Wong et al., 2008). We further noted that the kinetics of ligand-mediated protein carbonylation is transient. Typically, in cultured cells, ligands activate the carbonylation of various proteins within 10 min and the activated protein carbonylation reverts to baseline by 30 min. These results suggest that there is a mechanism for the elimination of the formed carbonyls. We named this process “de-carbonylation” (Wong et al., 2008). To understand the mechanism of de-carbonylation, we tested the hypothesis that protein carbonyls may be reduced. We found that the addition of reductants to rat heart homogenates resulted in a decrease in the protein carbonyl content (Wong et al., 2013). By contrast, reductants had no effect on the carbonyl content in purified proteins, suggesting that protein carbonyls are not reduced in the absence of other cellular components. From these results, we hypothesized that cells contain catalysts for the reduction of protein carbonyls. This hypothesis is supported by our results demonstrating that the heating of heart homogenates to inactivate cellular enzymes inhibits the decrease in protein carbonyls in vitro, and that knocking down glutaredoxin 1 in the cells inhibits protein de-carbonylation (Wong et al., 2013).
We used two-dimensional gel electrophoresis and mass spectrometry to identify proteins that can be de-carbonylated and found that peroxiredoxin 6 (Prx6) is one such protein (Wong et al., 2013).\n\nSince both arginine and proline residues can be oxidized to form glutamyl semialdehyde that can further be oxidized to form glutamic acid, we speculated that arginine, proline, and glutamic acid residues may be interchangeable in the biological system, in a process that resembles site-directed mutagenesis. This article reports that the proline residue 45 of the human Prx6 protein molecule can be converted into glutamic acid in cells, indeed demonstrating the existence of a naturally occurring site-directed mutagenesis/protein engineering-like process that may be regulated by ROS.\n\n\nMethods\n\nHuman pulmonary artery smooth muscle cells (ScienCell Research Laboratories, Carlsbad, CA, USA) grown in 10 cm dishes were serum-starved overnight with 10 ml of 0.01% fetal bovine serum-containing Dulbecco’s Modified Eagle’s medium (Mediatech, Inc., Manassas, VA, USA) for cell signaling studies. To prepare lysates, the cells were washed with phosphate buffered saline and solubilized with 1 ml of 50 mM Hepes solution (pH 7.4) containing 1% (v/v) Triton X-100, 4 mM EDTA, 1 mM sodium fluoride, 0.1 mM sodium orthovanadate, 1 mM tetrasodium pyrophosphate, 2 mM PMSF, 10 µg/mL leupeptin, and 10 µg/mL aprotinin. Cell lysates (1 ml) were immunoprecipitated with the rabbit polyclonal anti-Prx6 antibody (Sigma-Aldrich, St. Louis, MO, USA; Catalogue # P0058; 5 µg) and SureBeads Protein G Magnetic Beads (Bio-Rad Laboratories, Hercules, CA, USA; 1 mg) for 1 h at room temperature.\n\nImmunoprecipitation samples were processed with trypsin digestion (12.5 ng/µl) followed by a C18 Zip-tip clean-up (EMD Millipore, Billerica, MA, USA).
Tryptic peptide samples were reconstituted in 20 µl of 0.1% formic acid before nanospray liquid chromatography/mass spectrometry/mass spectrometry (LC/MS/MS) analysis was performed.\n\nThe tryptic peptides mixture from each sample was analyzed using a Thermo Scientific Q-Exactive Hybrid Quadrupole-Orbitrap Mass Spectrometer (Thermo Electron, Bremen, Germany) equipped with a Thermo Dionex UltiMate 3000 RSLCnano System (Thermo Dionex, Sunnyvale, CA, USA). Tryptic peptide samples were loaded onto a peptide trap cartridge at a flow rate of 5 μl/min. The trapped peptides were eluted onto a reversed-phase 20-cm C18 PicoFrit column (New Objective, Woburn, MA, USA) using a linear gradient of acetonitrile (3–36%) in 0.1% formic acid. The elution duration was 60 min at a flow rate of 0.3 μl/min. Eluted peptides from the PicoFrit column were ionized and sprayed into the mass spectrometer using a Nanospray Flex Ion Source ES071 (Thermo Scientific, Waltham, MA, USA) under the following settings: spray voltage 1.6 kV and capillary temperature 250°C. The Q Exactive instrument was operated in the data-dependent mode to automatically switch between full scan MS and MS/MS acquisition. Survey full scan MS spectra (m/z 300−2,000) were acquired in the Orbitrap with 70,000 resolution (m/z 200) after the accumulation of ions to a 3 × 106 target value based on predictive AGC from the previous full scan. Dynamic exclusion was set to 20 s. The 15 most intense multiply charged ions (z ≥ 2) were sequentially isolated and fragmented in the Axial Higher Energy Collision-induced Dissociation (HCD) cell using normalized HCD collision energy at 25% with an AGC target of 1e5 and a maximum injection time of 100 ms at 17,500 resolution. Two independent MS analyses in triplicate (a total of six cell samples) were performed.\n\nThe raw MS files were analyzed using the Thermo Proteome Discoverer 1.4.1 platform (Thermo Scientific, Bremen, Germany) for peptide identification and protein assembly. 
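The Pro-to-Glu conversion this article searches for corresponds to a well-defined monoisotopic mass shift, which can be checked against standard residue masses; the sketch below writes the Glu-substituted peptide explicitly as DFTEVCTTELGR, our rendering of the reported modified peptide, with Cys carrying the fixed carbamidomethyl modification used in the database search:

```python
# Sanity check: recompute the Pro -> Glu mass shift and the [M+2H]2+ m/z
# values of the unconverted and converted tryptic peptides from standard
# monoisotopic residue masses (Da). Cys includes carbamidomethyl (+57.02146).
RES = {
    "G": 57.02146, "P": 97.05276, "V": 99.06841, "T": 101.04768,
    "C": 103.00919 + 57.02146, "L": 113.08406, "D": 115.02694,
    "E": 129.04259, "F": 147.06841, "R": 156.10111,
}
WATER, PROTON = 18.010565, 1.007276

def mz(peptide, charge=2):
    """[M+zH]z+ m/z from summed residue masses plus one water."""
    mono = sum(RES[aa] for aa in peptide) + WATER
    return (mono + charge * PROTON) / charge

shift = RES["E"] - RES["P"]  # Pro -> Glu conversion
print(f"{shift:.3f}")                # 31.990
print(f"{mz('DFTPVCTTELGR'):.2f}")   # 698.33 (unconverted peptide)
print(f"{mz('DFTEVCTTELGR'):.2f}")   # 714.33 (Pro45 -> Glu)
```

The computed shift matches the +31.990 Da dynamic modification, and the two m/z values match those reported for the co-eluting peptide pair.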
The raw data files were searched against the human protein sequence database obtained from the NCBI website (https://www.ncbi.nlm.nih.gov) using the Proteome Discoverer software based on the SEQUEST algorithm. The carbamidomethylation of cysteines was set as a fixed modification, while oxidation, Q/N deamidation (+0.98402 Da) and Pro>Glu (+31.990 Da) were set as dynamic modifications. The minimum peptide length was specified to be five amino acids. The precursor mass tolerance was set to 15 ppm, whereas fragment mass tolerance was set to 0.05 Da. The maximum false peptide discovery rate was specified as 0.01.\n\n\nResults\n\nTo identify protein carbonylation sites, we enriched Prx6 by immunoprecipitation from cultured human cells. The Prx6 immunoprecipitation samples were processed for digestion by trypsin, and the tryptic peptides were analyzed by nanoLC-MS/MS analysis and protein sequence alignment to identify proline sites converted into glutamic acid in Prx6. The conversion was identified based on a mass shift of +31.990 Da at the proline residue (Figures 2A and B). The experiments led to the identification of one specific site at Pro 45 in human Prx6 protein (Figure 2C).\n\nFigure 2. (A) Extracted ion chromatograms of Prx6 peptide (DFTP+31.990VCTTELGR, +2 charge, m/z=714.33) (top) and its non-conversion counterpart (DFTPVCTTELGR, +2 charge, m/z=698.33) (bottom). Both peptides were eluted at the same retention time and are from affinity-enriched cultured human cell extract using the anti-Prx6 antibody. (B) High resolution MS spectra of the co-elution of peptides (DFTP+31.990VCTTELGR, +2 charge, m/z=714.33) (right) and its non-conversion counterpart (DFTPVCTTELGR, +2 charge, m/z=698.33) (left). (C) Illustration of the identified proline 45 conversion into glutamic acid in cultured human cells (shown in bold red).
Sequence areas containing amino acid residues shown in green are detected by LC-MS/MS analysis after trypsin digestion.\n\nWe are reasonably confident that the identified mass shift of +31.990 Da is caused by the conversion of proline into glutamic acid, since the Prx6 was affinity-purified before MS/MS analysis. Since the conversion of proline into glutamic acid in Prx6 is a newly identified post-translational modification, it is desirable to confirm the structure of the identified peptides to ensure that the derived mass shifts of +31.99 Da are caused by the conversion into glutamic acid. MS/MS and HPLC co-elution are gold standards for verifying peptide identification. As demonstrated in Figure 3, both peptides, DFTP+31.990VCTTELGR, +2 charge, m/z=714.33, and its non-conversion counterpart DFTPVCTTELGR, +2 charge, m/z=698.33, were co-eluted with a peak shift of less than 0.2 min. Our results showed that the high resolution MS/MS fragmentation patterns of DFTP+31.990VCTTELGR and its non-conversion counterpart DFTPVCTTELGR were almost identical, except for the addition of +31.990 Da to fragments that contain the proline 45 residue (Figures 3A and B).\n\nFigure 3. (A) High resolution MS/MS spectra of peroxiredoxin 6 (Prx6) proline to glutamic acid conversion peptide (DFTP+31.990VCTTELGR). (B) High resolution MS/MS spectra of Prx6 proline 45 peptide (DFTPVCTTELGR). Spectra were obtained by LC-MS/MS analysis using the Thermo UltiMate 3000 RSLCnano System and Q Exactive Hybrid Quadrupole-Orbitrap Mass Spectrometer. (C) % of Prx6 molecules with the proline 45 conversion into glutamic acid in cultured human cells.
Two independent MS analyses in triplicate (a total of six cell samples) were performed.\n\nAnalysis of the ion intensity of the MS spectra of DFTP+31.990VCTTELGR and its non-conversion counterpart DFTPVCTTELGR (Figure 3C) determined that the proline 45 to glutamic acid conversion occurs in 5–10% of the Prx6 molecules in our samples, with a mean of 7.43 ± 1.78% (N=6).\n\n\nDiscussion\n\nThe present study introduces the revolutionary concept that a protein engineering-like process could occur naturally in the biological system. Specifically, we identified that proline 45 of the Prx6 protein can be converted into glutamic acid. Proline 45 is in the peroxidase catalytic domain (Fisher, 2011; Fisher, 2017); thus, this conversion should have functional significance. Future work should identify whether this conversion increases, decreases or modifies the catalytic activity of Prx6. Such studies would open up the possibility that proteins with altered amino acid sequences have functional roles in the biological system.\n\nThe results from the present study also reveal a new mechanism of ROS action, indicating that the amino acid conversion, specifically the proline–glutamic acid conversion, is a consequence of oxidative stress mediated by the formation of glutamyl semialdehyde in the process of protein carbonylation. Through glutamyl semialdehyde, other conversions among arginine, proline, and glutamic acid are possible. Since the caged and site-directed production of hydroxyl radicals and carbonyl formation can occur via metal binding to specific sites of the protein structure (Stadtman & Berlett, 1991; Wong et al., 2010), ROS-mediated amino acid conversion may be a tightly regulated process.\n\n\nData availability\n\nThe raw MS files from the output of the LC/MS/MS are available: DOIs 10.17605/OSF.IO/5FN2E and 10.17605/OSF.IO/RP9J8 (Suzuki, 2017a; Suzuki, 2017b).",
"appendix": "Author contributions\n\nYJS conceived the study and designed the experiments. JH and YJS carried out the research. JH and YJS prepared the first draft of the manuscript. Both authors were involved in the revision of the draft manuscript and have agreed on the final content.\n\nCompeting interests\n\nNo competing interests were disclosed.\n\nGrant information\n\nThis work was supported by the National Institute on Aging and National Heart, Lung, and Blood Institute (NIH; grants R03 AG047824 and R01 HL72844, respectively) to YJS. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\nReferences\n\nAmici A, Levine RL, Tsai L, et al.: Conversion of amino acid residues in proteins and amino acid homopolymers to carbonyl derivatives by metal-catalyzed oxidation reactions. J Biol Chem. 1989; 264(6): 3341–3346.\n\nBerlett BS, Stadtman ER: Protein oxidation in aging, disease, and oxidative stress. J Biol Chem. 1997; 272(33): 20313–20316.\n\nFisher AB: Peroxiredoxin 6: a bifunctional enzyme with glutathione peroxidase and phospholipase A2 activities. Antioxid Redox Signal. 2011; 15(3): 831–844.\n\nFisher AB: Peroxiredoxin 6 in the repair of peroxidized cell membranes and cell signaling. Arch Biochem Biophys. 2017; 617: 68–83.\n\nFreeman BA, Crapo JD: Biology of disease: free radicals and tissue injury. Lab Invest. 1982; 47(5): 412–426.\n\nGrune T, Reinheckel T, Davies KJ: Degradation of oxidized proteins in mammalian cells. FASEB J. 1997; 11(7): 526–534.\n\nHalliwell B, Gutteridge J, eds: Free Radicals in Biology and Medicine. Oxford: Oxford University Press, 2007.\n\nHarman D: Aging: a theory based on free radical and radiation chemistry. J Gerontol. 1956; 11(3): 298–300.\n\nLevine RL: Proteolysis induced by metal-catalyzed oxidation. Revis Biol Celular. 1989; 21: 347–360.\n\nLevine RL, Stadtman ER: Oxidative modification of proteins during aging. Exp Gerontol. 2001; 36(9): 1495–1502.\n\nLevine RL: Carbonyl modified proteins in cellular regulation, aging, and disease. Free Radic Biol Med. 2002; 32(9): 790–796.\n\nStadtman ER: Metal ion-catalyzed oxidation of proteins: biochemical mechanism and biological consequences. Free Radic Biol Med. 1990; 9(4): 315–325.\n\nStadtman ER, Berlett BS: Fenton chemistry. Amino acid oxidation. J Biol Chem. 1991; 266(26): 17201–17211.\n\nStadtman ER, Oliver CN, Levine RL, et al.: Implication of protein oxidation in protein turnover, aging, and oxygen toxicity. Basic Life Sci. 1988; 49: 331–339.\n\nSuzuki Y: Raw MS Files (F1000Res April, 2017). Open Science Framework. 2017a.\n\nSuzuki Y: Raw MS Files (F1000Res April, 2017)#2. Open Science Framework. 2017b.\n\nSuzuki YJ, Carini M, Butterfield DA: Protein Carbonylation. Antioxid Redox Signal. 2010; 12(3): 323–325.\n\nWong CM, Cheema AK, Zhang L, et al.: Protein carbonylation as a novel mechanism in redox signaling. Circ Res. 2008; 102(3): 310–318.\n\nWong CM, Marcocci L, Das D, et al.: Mechanism of protein decarbonylation. Free Radic Biol Med. 2013; 65: 1126–1133.\n\nWong CM, Marcocci L, Liu L, et al.: Cell signaling by protein carbonylation and decarbonylation. Antioxid Redox Signal. 2010; 12(3): 393–404."
}
|
[
{
"id": "23475",
"date": "14 Jun 2017",
"name": "Joaquim Ros",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe paper submitted by Suzuki and Hao highlights the importance of modifications occurring in proteins as a consequence of oxidative stress. In this particular case, the authors provide data showing that the proline residue at position 45 in peroxiredoxin 6 can be converted into a glutamic acid residue through glutamyl semialdehyde. The results shown in the paper are well designed, solved and clear. From a technical point of view, the approach is precise and gives the information necessary to draw the conclusions. Nevertheless, there are some minor details that this reviewer considers should be added or corrected in the text:\n\n1. The conversion of proline to glutamic semialdehyde is already known. Since the results show that the carbonyl group is further oxidized to glutamic acid, the authors should add a sentence about how they believe this last step of oxidation occurs.\n\n2. The authors show that P45 is transformed to E (giving a mass increase of 31.990 daltons). Did the authors check (or find) the intermediate form – the glutamic semialdehyde – and if so, to what extent is this intermediate further oxidized to glutamic acid? A brief sentence should be added to the text if they have these data.\n\n3. There is an exciting idea concerning the concept of “naturally occurring protein engineering”. If this is true, do the authors believe that this could be a motor for evolution? Could they add a short comment on that?\n\n4. Finally, I disagree with the use of “interchangeable” in the text. 
This would lead one to think that a protein could have a P, an E, or an R in a given position without compromising its function. It is hard to believe that changing an E for an R would result in a neutral consequence. Since the consequences of such a change (increased or decreased activity, stability, …) have not been proven in the case of Prx6, it seems reasonable that the term should be removed, saying instead simply that this could be a driving force for evolution, for instance (as suggested above). In the “Discussion” I would suggest the authors change the sentence starting with “Such studies would open up…”. I think they can really say “…studies will open up…” and change “…have functional roles…” to “…can acquire new functional roles…”. Do the authors agree?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "4006",
"date": "28 Sep 2018",
"name": "Yuichiro Suzuki",
"role": "Author Response",
"response": "Reviewer 1 The paper submitted by Suzuki and Hao highlights the importance of modifications occurring in proteins as a consequence of oxidative stress. In this particular case, the authors provide data showing that proline residue at position 45 in peroxiredoxin 6 can be converted into glutamic acid residues through glutamyl semialdehyde. The results shown in the paper are well designed, solved and clear. From technical point of view, the approach is precise and gives the information necessary to draw the conclusions. Nevertheless there are some minor details that this reviewer consider that should be added or corrected in the text: 1. The conversion of proline to glutamic semialdehyde is already known. Since the results show that the carbonyl group is further oxidized to glutamic acid, the authors should add a sentence about how they believe this last step of oxidation occurs. [RESPONSE: The conversion of free proline to free glutamic acid through the formation of free glutamic semialdehyde has been reported to occur. While it is not yet known whether the protein proline residue conversion to the protein glutamic acid residue through the formation of glutamic semialdehyde within the protein molecule utilizes the same mechanism, we have included a discussion of these published papers on free amino acids in the new version.] 2. The authors show that P45 is transformed to E (provided a mass increase of 31.990 daltons). Did the authors check (or find) the intermediate form –the glutamic semialdehyde- and if so, to what extent this intermediate is further oxidized to glutamic acid? A brief sentence should be added to the text if they have these data. [RESPONSE: The reviewer makes a very important point. We did check, but in these particular samples from the present study, we only detected some proline 45 as a structure that is consistent with glutamic acid, but not glutamic semialdehyde. 
Further work is needed to define the nature of protein carbonylation processes in the biological system.] 3. There is an exciting idea concerning the concept of “naturally occurring protein engineering”. Being this true, do the authors believe that this could be a motor for evolution? Could they add a short comment on that? [RESPONSE: While further work is needed to prove the occurrence of protein amino acid conversion in the biological system, the present study provided data that is consistent with this concept. If it were true that posttranslational modification mechanisms can convert one type of amino acid to another, this would imply that the DNA sequences are not the sole determinant of protein sequences. To ensure that our observations of the presumed occurrence of proline-glutamic acid conversion are not due to mutation of DNA, we treated cells with hydrogen peroxide for 10 min. This short treatment, during which gene transcription and translation processes should not be completed, caused a robust modification of proline 45, confirming that this event is post-translationally regulated. This new data has been included in Fig. 4 of the new version.] 4. Finally, I disagree with the use of “interchangeable” in the text. This would induce to think that a protein could have a P or a E or a R in a given position without compromising its function. It is hard to believe that changing an E for an R would result in a neutral consequence. Since the consequences of such change (increase or decrease activity, stability,…) has not been proved in the case of Prx6, it seems reasonable that the term should be removed and simply say that this could be a driving force for evolution, for instance (as suggested above). [RESPONSE: We do mean that P, E and R can be in a given position within the protein structure. We, however, do not imply that this would not alter the function. 
Our theory is that such alterations would yield proteins with altered functions, contributing to the diverse nature of biological mechanisms. In fact, we performed experiments to mutate Pro45 to Glu and found that the redox interactions between peroxiredoxin 6 and hydrogen peroxide were altered. We have included this new data in Fig. 5 of the new version. We have also deleted the term “interchangeable”.] In the “Discussion” I would suggest the authors to change the sentence starting with “Such studies would open up…”. I think they can really say “…studies will open up…” and change “…have functional roles…” for “can acquire new functional roles…”. Do the authors agree? [RESPONSE: This has been modified in the new version.]"
}
]
},
{
"id": "23628",
"date": "20 Jun 2017",
"name": "Dolores Pérez-Sala",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nIn their manuscript, Suzuki and Hao report the finding of a peptide in Peroxiredoxin 6 that shows a mass increment of 32 Da in mass spectrometry analysis. NanoLC-MS/MS analysis maps this increment to the site of a proline residue (P45 in the protein). This mass increment is found to affect approximately 7% of the peroxiredoxin 6 protein present in the samples. In view of these results the authors interpret that proline has undergone an oxidative modification leading to its conversion into glutamic acid. Our impression is that the information provided is not sufficient to establish this point. The mass increment of 32 Da could also be due to dihydroxylation of proline, which is a known posttranslational modification. Please see: http://web.expasy.org/findmod/findmod_masses.html\n\nTherefore, additional experimental evidence will be required to confirm the authors’ conclusion. Specifically, we would suggest several of the following approaches:\n- Synthesize both the peptide with proline and the one with glutamic acid\n- Analyze the two peptides by HPLC. If they separate, do the same type of analysis with the peptides from their samples\n- Attempt to oxidize the proline-containing peptide in vitro (or the intact protein) to monitor the changes in proline\n- Perform amino acid analysis to confirm the presence of glutamic acid\n- Employ other derivatization or detection strategies to confirm the presence of glutamic acid.\n\nIdeally, the modification described could be explored in other cell types under different oxidative conditions.\n\nIf the authors cannot obtain an unequivocal confirmation of the presence of glutamic acid, the title of the manuscript and the main interpretations should be changed.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
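Reviewer 2's dihydroxylation caveat can be made quantitative: converting Pro to Glu adds exactly two oxygen atoms, so the two candidate modifications are isobaric and cannot be distinguished by accurate mass alone. A small check, using standard monoisotopic masses (reference values assumed here, not quoted from the review):

```python
# Monoisotopic masses in Da (standard reference values).
O = 15.9949146           # oxygen atom
PRO = 97.05276           # proline residue
GLU = 129.04259          # glutamic acid residue

pro_to_glu = GLU - PRO   # proposed conversion
dihydroxylation = 2 * O  # dihydroxyproline (Pro + 2 oxygens)

print(f"Pro -> Glu:      {pro_to_glu:.5f} Da")
print(f"dihydroxylation: {dihydroxylation:.5f} Da")
# The two shifts agree to well under 0.001 Da, so accurate mass alone
# cannot separate them -- hence the request for orthogonal evidence
# (co-elution with synthetic standards, amino acid analysis, etc.).
```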
"responses": [
{
"c_id": "4004",
"date": "28 Sep 2018",
"name": "Yuichiro Suzuki",
"role": "Author Response",
"response": "Reviewer 2 In their manuscript, Suzuki and Hao report the finding of a peptide in Peroxiredoxin 6 that shows a mass increment of 32 in mass spectrometry analysis. NanoLC-MSMS analysis maps this increment at the site of a proline residue (P45 in the protein). This mass increment is found to affect approximately 7% of the peroxiredoxin 6 protein present in the samples. In view of these results the authors interpret that proline has suffered an oxidative modification leading to its conversion in glutamic acid. Our impression is that the information provided is not sufficient to establish this point. The mass increment of 32 Da could also be due to dihydroxylation of proline, which is a known posttranslational modification. Please see: http://web.expasy.org/findmod/findmod_masses.html Therefore, additional experimental evidence will be required to confirm the authors’ conclusion. Specifically, we would suggest several of the following approaches: -Synthetize both the peptide with proline and with glutamic acid -Analyze the two peptides by HPLC. If they separate, do the same type of analysis with the peptides from their samples -Attempt to oxidize the proline-containing peptide in vitro (or the intact protein) to monitor the changes in proline -Perform amino acid analysis to confirm the presence of glutamic acid -Employ other derivatization or detection strategies to confirm the presence of glutamic acid. [RESPONSE: We thank the reviewer for pointing out that the observed mass shift could be due to the conversion of the proline residue to dihydroxyproline. We have modified the manuscript so that it is clear that, while we have provided data that is consistent with the idea of the proline-to-glutamic acid conversion, the present study has not proven this, as it is also possible that proline 45 is dihydroxylated. 
While the reviewer’s suggestion is excellent for purified proteins, our thesis is that this protein amino acid conversion is driven by biological factors, thus we need to prove this in cell systems. We will try to find alternative avenues to prove this concept in the biological system and hope to publish such results in future papers.] Ideally, the modification described could be explored in other cell types under different oxidative conditions. [RESPONSE: The reviewer is correct. Indeed, we are currently studying various systems including other cell types as well as tissues from patients. We hope to publish these results in future papers.] If the authors cannot obtain an unequivocal confirmation of the presence of glutamic acid, the title of the manuscript and the main interpretations should be changed. [RESPONSE: In the new version, we have changed the title to “Results supporting the concept of the oxidant-mediated protein amino acid conversion, a naturally occurring protein engineering process, in human cells” and modified the text to make it clear that, while the present study obtained results that are consistent with our hypothesis, further work is needed to prove this concept.]"
}
]
},
{
"id": "23077",
"date": "28 Jun 2017",
"name": "Adelina Rogowska-Wrzesinska",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nGeneral overview:\nThis manuscript presents an interesting aspect of the effect of ROS on proteins – the possibility of converting one type of amino acid into another one. It briefly describes the idea and presents results of a single mass spectrometry experiment that identifies two forms of a peptide obtained by trypsin digestion of the Prx6 protein. One form contains the proline residue and the other contains a modified form of the proline residue. The modification mass is +31.990 Da, and based on that the authors conclude that oxidative stress can lead to carbonylation of proline and its further conversion to glutamic acid. No evidence is provided to prove the link between protein oxidation, ROS, and the conversion of proline to glutamic acid.\n\nDetailed comments:\nAbstract. Modified proteins are degraded by multiple enzymatic mechanisms, not just the proteasome. Well established roles for lysosomes, LON protease and other proteases have been demonstrated.\n\nThe authors suggest that the Pro and Arg conversion to Glu is “interchangeable”. This statement suggests reversibility of the process, which is clearly not the case – it is a one-way reaction.\n\nThe conversion of one amino acid into another via oxidative reactions is definitely not a “revolutionizing concept”. 
It is very well established that His is converted to Asn and Asp, that Trp (and also other amino acids) can be converted to Gly (via side-chain elimination reactions), and that Cys can be converted to Ala.\n\nThe Introduction section is written with a focus on the authors’ own work, and its relevance to this work is actually not 100% clear. At the same time a lot of information is missing: Have similar processes been observed before (in vitro and/or in vivo)? Can other amino acids undergo similar conversion processes? What is the state of the art in this field?\n\nCarbonylation is not limited to Pro and Arg! Unfortunately this is not clear from the abstract or the introduction.\n\nMethods are described in a very short form and a number of details are missing, e.g. conditions for immunoprecipitation or protein digestion, amount of starting material and material used for LC-MS analysis. RAW files and processed files should be submitted to an MS data repository, like for example the PRIDE archive.\n\nIt is not clear at all why the cells were starved prior to the experiment and how this is linked to oxidative stress and protein carbonylation. No comparison to non-starved cells has been made.\n\nIt is unclear from the text that the Cys in the DFTPVCTTELGR peptide is modified. Although not stated in the methods section, it seems that the samples have been reduced and alkylated because the Proteome Discoverer search parameters included carbamidomethylation of cysteines as a fixed modification.\n\n+31.990 Da is the modification mass of the proline residue. Are there any other types of post-translational modifications that would result in a similar mass change?\n\nAre there any other modifications present in Prx6?\n\nThe MS/MS spectrum in Figure 3 is the only evidence of the Pro to Glu conversion. Is this the only spectrum that has been observed? According to the Methods section, six samples have been analysed by LC-MS. 
How many times was this peptide fragmented in each sample?\nPresentation of multiple spectra would increase the credibility of the observation. Additionally, the quality of the figure is not very high and it is very difficult to read the masses of the ions present in the spectra. Therefore, again, submitting the results to an MS data repository would help to validate the quality of the obtained results.\n\nThe basis of this selective oxidation is not addressed.\n\nIt is unclear why the authors do not test the functional significance of this modification, if they have already purified the material – it is not a very difficult assay.\n\nAdditional experiments where cells are collected under different conditions involving oxidative stress would help to provide the link between the carbonylation and the conversion of Pro to Glu. For the moment it is not clear whether this conversion is driven by oxidative stress or another unknown process.\n\nThe authors should not quote amino acid conversion levels based on ion intensities to 2 decimal places (7.43 +/- 1.78%). Unlikely to be this accurate.\n\nThe comments in the Discussion about “altered amino acids having functional roles” betray a lack of knowledge of the protein oxidation field – this is very well established (e.g. all the work on oxidised Cys residues).\n\nMany of the references cited are rather old. The most recent publications are the authors’ own work. The field has moved on since many of these works were published.\n\nThe authors’ final statement “ROS-mediated amino acid conversion may be a tightly regulated process” should be tempered (or completely omitted). “ROS” is a very generic term, and the vast majority of oxidants do not show marked residue and site specificity. They are not “tightly regulated” in the vast majority of cases.\n\nNo discussion of the limitations of the presented results and conclusions is given.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "4005",
"date": "28 Sep 2018",
"name": "Yuichiro Suzuki",
"role": "Author Response",
"response": "Reviewer 3 General overview: This manuscript presents an interesting aspect of the effect of ROS on proteins – the possibility of converting one type of amino acid into another one. It briefly describes the idea and presents results of a single mass spectrometry experiment that identifies two forms of a peptide obtained by trypsin digestion of Prx6 protein. One form contains proline residue and the other form contains modified form of the proline residue. The modification mass is +31.990 Da and based on that the authors conclude that oxidative stress can lead to carbonylation of proline and its further conversion to glutamic acid. No evidence is provided to proof the link between protein oxidation, ROS and the conversion of proline to glutamic acid. Detailed comments: Abstract. Modified proteins are degraded by multiple enzymatic mechanisms not just the proteasome. Well established roles for lysosomes, LON protease and other proteases have been demonstrated. [RESPONSE: In the new version, we have modified this statement in the Abstract and Introduction sections.] The authors suggest that the Pro and Arg conversion to Glu is “interchangeable”. This statement suggest reversibility of the process, which is clearly not the case – it is a one-way reaction. [RESPONSE: In the new version, we have deleted the term “interchangeable”.] The conversion of one amino acid into another via oxidative reactions is definitely not a “revolutionizing concept”. It is very well established that His is converted to Asn and Asp, that Trp (and also other amino acids) can be converted to Gly (via side-chain elimination reactions), that Cys can be converted to Ala. [RESPONSE: The reviewer is correct that conversions of free amino acids are well known, but not those of protein amino acid residues.] Introduction section is written with a focus on authors own work and its relevance to this work is actually not 100% clear. 
At the same time a lot of information is missing: Have similar processes been observed before (in vitro and/or in vivo)? Can other amino acids undergo similar conversion processes? What is the state-of-art in this field? [RESPONSE: The concept of oxidant-mediated protein amino acid conversion was generated while we were studying the role of protein carbonylation in the mechanism of redox signaling. We discovered that formed protein carbonyls can be decarbonylated in the biological system, but not in purified proteins. These studies generated the concept that, in the biological system, the formation of glutamic semialdehyde from protein proline residues is reversible. This generated the idea that protein arginine residues can become protein proline residues. We initially tested this hypothesis; however, we have not yet come across this event. Instead, as described in this paper, we generated data that are consistent with the occurrence of proline-to-glutamic acid conversion, which is also a process of oxidant-mediated protein amino acid conversion.] Carbonylation is not limited to Pro and Arg ! Unfortunately this is not clear from the abstract or the introduction. [RESPONSE: In the new version, we have modified the text to make this clearer in the Introduction section.] Methods are described in a very short form and a number of details are missing e.g. conditions for immunoprecipitation or protein digestion, amount of starting material and material used for LC-MS analysis. RAW files and processed files should be submitted to MS data repository like for example PRIDE archive. [RESPONSE: In the new version, we have expanded the Methods section. The raw MS files are publicly available as described in the Data availability section.] It is not clear at all why the cells where starved prior the experiment and how this is linked to oxidative stress and protein carbonylation. No comparison to non-starved cells had been made. 
[RESPONSE: We detected this modification in both starved cells and non-starved cells. In the new version, we have added this statement.] It is unclear from the text that the Cys in the DFTPVCTTELGR peptide is modified. Although not stated in the Methods section, it seems that the samples have been reduced and alkylated, because the Proteome Discoverer search parameters included carbamidomethylation of cysteines as a fixed modification. [RESPONSE: The reviewer is correct that the samples are reduced for MS analysis, thus we cannot analyze cysteine redox status using this approach. In the new version, we have added this information in the Methods section.] +31.990 is the modification mass of the proline residue. Are there any other types of post-translational modifications that would result in a similar mass change? [RESPONSE: As pointed out by Reviewer #2, this mass shift could also be due to the formation of dihydroxylated proline. The new version of the manuscript has been modified so that it is clear that this work presents data that are consistent with the idea of proline-to-glutamic acid conversion, but further work is needed to prove this concept.] Are there any other modifications present in Prx6? [RESPONSE: There are a number of other modifications. In the present work, we focused on providing evidence for protein amino acid conversions.] The MS/MS spectrum in Figure 3 is the only evidence of the Pro to Glu conversion. Is this the only spectrum that has been observed? According to the Methods section, six samples have been analysed by LC-MS. How many times was this peptide fragmented in each sample? [RESPONSE: Experiments have been performed more than 6 times and the results have been reproducible.] Presentation of multiple spectra would increase the credibility of the observation. Additionally, the quality of the figure is not very high and it is very difficult to read the masses of the ions present in the spectra. 
Therefore, again, submitting the results to an MS data repository would help to validate the quality of the obtained results. [RESPONSE: The raw MS files are publicly available as described in the Data availability section.] The basis of this selective oxidation is not addressed. [RESPONSE: If the concept of oxidant-mediated protein amino acid conversion is true, this means that DNA sequences are not the sole determinant of protein sequences, opening up a completely new concept in biology. We have stated this in the Discussion section of the new version.] It is unclear why the authors do not test the functional significance of this modification, if they have already purified the material – it is not a very difficult assay. [RESPONSE: In the new version, we have included new data showing the functional consequence of the proline 45 to glutamic acid conversion in Fig. 5.] Additional experiments where cells are collected under different conditions involving oxidative stress would help to provide the link between the carbonylation and the conversion of Pro to Glu. For the moment it is not clear whether this conversion is driven by oxidative stress or by another unknown process. [RESPONSE: In the new version, we have included new data showing that the treatment of cells with hydrogen peroxide for 10 min drives this modification in Fig. 4.] The authors should not quote amino acid conversion levels based on ion intensities to 2 decimal places (7.43 +/- 1.78%). Unlikely to be this accurate. [RESPONSE: In the new version, we have modified these.] The comments in the Discussion about “altered amino acids having functional roles” betray a lack of knowledge of the protein oxidation field – this is very well established (e.g. all the work on oxidised Cys residues). [RESPONSE: We are strictly talking about the oxidant-mediated protein amino acid conversion process.] Many of the references cited are rather old. The most recent publications are the authors’ own work. 
The field has moved on since many of these works were published. [RESPONSE: In the new version, we have added more recent references.] The authors’ final statement, “ROS-mediated amino acid conversion may be a tightly regulated process”, should be tempered (or completely omitted). “ROS” is a very generic term, and the vast majority of oxidants do not show marked residue and site specificity. They are not “tightly regulated” in the vast majority of cases. [RESPONSE: The reviewer is correct that ROS in general may not confer specificity. However, in the case of oxidant-mediated protein amino acid conversion, such specificity may possibly regulate this process. In accordance with the reviewer’s comment, in the new version, we have deleted the term “tightly regulated”.] No discussion of the limitations of the presented results and conclusions is given. [RESPONSE: In the new version, we have added a discussion of the limitations of the present study.]"
}
]
}
] | 1
|
https://f1000research.com/articles/6-594
|
https://f1000research.com/articles/7-1574/v1
|
28 Sep 18
|
{
"type": "Research Article",
"title": "Pyrolytic formation and photoactivity of reactive oxygen species in a SiO2/carbon nanocomposite from kraft lignin",
"authors": [
"Dhanalakshmi Vadivel",
"Ilanchelian Malaichamy",
"Dhanalakshmi Vadivel"
],
"abstract": "SiO2 and carbon produced from kraft lignin pyrolyzed at 600°C can generate stable reactive oxygen species (ROS) by reaction with atmospheric oxygen. In this study, we systematically investigate the photochemistry of peroxyl radicals in carbon-supported silica (PCS) and assess their effect on methylene blue (MB) photodegradation. Characterization revealed that the higher ROS generation ability of SiO2/carbon under UV light irradiation was attributable to its abundant photoactive surface-oxygenated functional groups.",
"keywords": [
"ROS",
"photochemistry",
"methylene blue",
"degradation",
"UV"
],
"content": "Introduction\n\nConsistent access to clean water has come into focus this millennium due to high pollution; a reduced amount of drinkable water could be the next challenge for the future due to overpopulation1–3. The application of photocatalytic technology using semiconductors to solve environmental problems, such as the degradation of organic effluents, has received much attention4–8. Heterogeneous photocatalysis using semiconductors is an interesting method falling among the advanced oxidation processes (AOPs)9–11 that can produce highly reactive oxygen-containing species (ROS). In fact, with this method it is possible to produce oxidizing molecules like hydrogen peroxide and singlet oxygen (1O2) together with radicals like the hydroxyl radical (OH•) and the superoxide radical anion (O2•−)12–13. These reactants can decompose organic pollutants in wastewater, giving harmless compounds14.\n\nRecently, N. Chen et al. reported reactive oxygen species generation in hydrochar and the photochemistry of sulfadimidine degradation in water15. Y. Chen et al. reported the photodegradation of tetracycline in aqueous solution under simulated sunlight irradiation through singlet oxygen16. Li et al. reported that the degradation of ibuprofen by UV–visible light irradiation included direct photolysis and self-sensitization via ROS17. Wang et al. reported that when a simpler molecule without visible-light absorption is degraded, the Fe-hydroxyl complexes still promote the generation of ROS and thus accelerate degradation, although the pathway of electron transfer and the mechanism of photocatalysis were not completely understood18.\n\nMany methods for photoassisted AOPs are present in the literature, such as photo-electrochemical cells composed of a boron-doped diamond anode and a carbon nanotube cathode; with this system, a model azo dye was depleted19. 
Exfoliated graphene decorated with titanium dioxide nanoparticles is also effective for photocatalytic water treatment20,21.\n\nIn the present work, stable peroxyl radicals in carbon-supported silica (PCS) are prepared from cheap starting materials. The method used is vacuum pyrolysis of kraft lignin deposited onto silica. Vacuum pyrolysis produces defective carbon bearing carbon radicals. These radicals are quickly transformed into peroxyl radicals by reaction with the oxygen molecules present in the atmosphere.\n\n\nMethods\n\nThe materials and methods used to produce PCS by high-vacuum pyrolysis, together with its characterization, have been described previously22. In brief, kraft lignin was absorbed onto silica and pyrolyzed under vacuum at 600 °C. For the kinetic data analysis, linear quadratic fitting and other kinetic fitting (reaction order checking) were performed using Origin v6.0.\n\n100 ml of air-equilibrated 10^-6 M solutions of MB (Sigma Aldrich, India) in water containing 100 mg (1 mg/ml) of neat SiO2 or PCS were poured into quartz cylindrical reactors (90 mm diameter × 25 mm height). Solutions were magnetically stirred in the dark for 10 min before irradiation and kept under stirring during the experiment. The light source consisted of two 15-W phosphor-coated lamps (center of emission, 366 nm). Aliquots (4 ml) were withdrawn at 5-min intervals (for a total of 10–12 samples) during the irradiation until the disappearance of the color. Solids were removed by syringe filtration with a 0.4-µm pore size, and the filtrates were immediately examined by UV-visible absorption spectroscopy in 1-cm quartz cuvettes using a JASCO V-630 UV-visible spectrophotometer. The absorbance was normalized by dividing the absorbance at 668 nm of the sample (A) by the absorbance of the initial solution (A0).\n\n\nResults and discussion\n\nTo assess the respective photocatalytic activities of PCS and of neat SiO2, we carried out competitive experiments with MB (Figure 1). 
PCS did not react with MB in the dark; in fact, solutions left for 24 hours in the dark did not show a decrease in MB concentration. Nonetheless, under dark conditions the dye was adsorbed by PCS to a nearly tenfold greater extent than by pristine SiO2 (dark region between −10 and 0 min, Figure 1b).\n\nNormalized spectral intensity of the 668 nm band of methylene blue (MB) during (a) the UV-irradiation of the MB/SiO2 suspension at 366 nm at different time intervals, and (b) the same process for the MB/peroxyl radicals in carbon-supported silica (PCS) suspensions under otherwise identical conditions. The region between −10 and 0 min refers to the extent of adsorption of the MB dye under dark conditions. It shows the first-order kinetics of the photodegradation of the MB dye by MB/PCS. Three repeats were performed.\n\nNormally photocatalysts produce radicals able to degrade organics, but in the case of PCS the catalyst already possesses reactive radicals.\n\nThe net effect of PCS on the photodegradation of MB is a threefold increase in the rate of photodegradation (Table 1). Without the assistance of an active photocatalyst, the only applicable reaction mechanism is the generation of singlet oxygen by sensitization (Equation 2) via the excited state of the dye. The singlet oxygen can react with MB, giving rise to photobleaching (Equation 3).\n\nDye + photon = Dye* (1)\n\nDye* + 3O2 = Dye + 1O2 (2)\n\n1O2 + Dye = oxidation products (3)\n\nWith PCS, MB is strongly adsorbed onto the pyrolytic carbon present on the catalyst surface. Moreover, pyrolytic carbon possesses a high concentration of peroxyl radicals. The enhancement of the reaction kinetics could be due to a local increase in the concentration of dye and active oxygen. 
Since the oxygen is reversibly adsorbed on the carbon, giving peroxyl radicals22, the surface of the catalyst is never depleted, owing to the presence of oxygen in solution.\n\nIn fact, under these conditions we can have, together with Equation 1–Equation 3, a possible reaction of the excited state of the reactant with peroxyl radicals or adsorbed oxygen on PCS (Equation 4).\n\nDye* + PCS-OO = PCS + dye oxidation (4)\n\nThe peroxyl radicals are reversibly formed by capture of atmospheric oxygen due to the presence of highly active pyrolytic carbon on PCS:\n\nPCS + O2 = PCS-OO (5)\n\nAnother possibility is the transfer of energy (or sensitization) from the excited state of the adsorbed dye directly to the defective pyrolytic carbon, giving rise to the formation of ROS. All these mechanisms lead to an enhancement of the degradation of MB.\n\n\nConclusion\n\nThis study has shown that silica can be coated successfully with pyrolytic carbon obtained from an inexpensive waste material, kraft lignin. The pyrolytic process performed at 600°C did not affect the crystalline state of the silica when it was coated with carbon. The photocatalytic activity was measured against pristine SiO2 through an examination of the kinetics of degradation of MB by UV-vis spectroscopy. Under UV light irradiation, the degradation was threefold greater for MB-PCS compared with MB-silica.\n\n\nData availability\n\nDataset 1: Raw data for the article ‘Pyrolytic formation and photoactivity of reactive oxygen species in a SiO2/carbon nanocomposite from kraft lignin’ are presented, 10.5256/f1000research.16080.d21890723",
"appendix": "Grant information\n\nWe are grateful to the PANACEA - ERASMUS MUNDUS of the European Commission, within the project Agreement Number 2012-2647/001-001 - EMA2, for an Action 2 scholarship in support of D.V.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe wish to thank Prof. Nick Serpone of the PhotoGreen Laboratory of the Department of Chemistry at the University of Pavia for useful discussions.\n\n\nReferences\n\nVinu R, Madras G: Environmental remediation by photocatalysis. J Indian Inst Sci. 2010; 90(2): 189–230.\n\nLachheb H, Puzenat E, Houas A, et al.: Photocatalytic degradation of various types of dyes (Alizarin S, Crocein Orange G, Methyl Red, Congo Red, Methylene Blue) in water by UV-irradiated titania. Appl Catal B. 2002; 39(1): 75–90.\n\nQu X, Alvarez PJ, Li Q: Applications of nanotechnology in water and wastewater treatment. Water Res. 2013; 47(12): 3931–3946.\n\nRodrigues S, Ranjit KT, Uma S, et al.: Single-step synthesis of a highly active visible-light photocatalyst for oxidation of a common indoor air pollutant: acetaldehyde. Adv Mater. 2005; 17(20): 2467–2471.\n\nKisch H, Macyk W: Visible-light photocatalysis by modified titania. Chem Phys Chem. 2002; 3(5): 399–400.\n\nDavis AP, Green DL: Photocatalytic oxidation of cadmium-EDTA with titanium dioxide. Environ Sci Technol. 1999; 33(4): 609–617.\n\nChoi H, Sofranko AC, Dionysiou DD: Nanocrystalline TiO2 photocatalytic membranes with a hierarchical mesoporous multilayer structure: synthesis, characterization, and multifunction. Adv Funct Mater. 2006; 16(8): 1067–1074.\n\nEl-Bahy ZM, Ismail AA, Mohamed RM: Enhancement of titania by doping rare earth for photodegradation of organic dye (Direct Blue). J Hazard Mater. 2009; 166(1): 138–143.\n\nSaquib M, Muneer M: TiO2-mediated photocatalytic degradation of a triphenylmethane dye (gentian violet), in aqueous suspensions. Dyes Pigments. 2003; 56(1): 37–49.\n\nMuruganandham M, Swaminathan M: Solar photocatalytic degradation of a reactive azo dye in TiO2-suspension. Sol Energy Mater Sol Cells. 2004; 81(4): 439–457.\n\nKaur S, Singh V: Visible light induced sonophotocatalytic degradation of Reactive Red dye 198 using dye sensitized TiO2. Ultrason Sonochem. 2007; 14(5): 531–537.\n\nInce NH, Tezcanli G, Belen RK, et al.: Ultrasound as a catalyzer of aqueous reaction systems: the state of the art and environmental applications. Appl Catal B. 2001; 29(3): 167–176.\n\nInce NH, Tezcanli G: Reactive dyestuff degradation by combined sonolysis and ozonation. Dyes Pigments. 2001; 49(3): 145–153.\n\nWang J, Zhang YY, Guo Y, et al.: Interaction of bovine serum albumin with Acridine Orange (C.I. Basic Orange 14) and its sonodynamic damage under ultrasonic irradiation. Dyes Pigments. 2009; 80(3): 271–278.\n\nChen N, Huang Y, Hou X, et al.: Photochemistry of Hydrochar: Reactive Oxygen Species Generation and Sulfadimidine Degradation. Environ Sci Technol. 2017; 51(19): 11278–11287.\n\nChen Y, Hu C, Qu J, et al.: Photodegradation of tetracycline and formation of reactive oxygen species in aqueous tetracycline solution under simulated sunlight irradiation. J Photochem Photobiol A Chem. 2008; 197(1): 81–87.\n\nLi FH, Yao K, Lv WY, et al.: Photodegradation of ibuprofen under UV-Vis irradiation: mechanism and toxicity of photolysis products. Bull Environ Contam Toxicol. 2015; 94(4): 479–483.\n\nWang J, Liu Z, Cai R: A new role for Fe3+ in TiO2 hydrosol: accelerated photodegradation of dyes under visible light. Environ Sci Technol. 2008; 42(15): 5759–5764.\n\nVahid B, Khataee A: Photoassisted electrochemical recirculation system with boron-doped diamond anode and carbon nanotubes containing cathode for degradation of a model azo dye. Electrochimica Acta. 2013; 88: 614–620.\n\nZhang H, Lv X, Li Y, et al.: P25-graphene composite as a high performance photocatalyst. ACS Nano. 2010; 4(1): 380–386.\n\nLightcap IV, Kosel TH, Kamat PV: Anchoring semiconductor and metal nanoparticles on a two-dimensional catalyst mat. Storing and shuttling electrons with reduced graphene oxide. Nano Lett. 2010; 10(2): 577–583.\n\nVadivel D, Speltini A, Zeffiro A, et al.: Reactive carbons from Kraft lignin pyrolysis: Stabilization of peroxyl radicals at carbon/silica interface. J Anal Appl Pyrol. 2017; 128: 346–352.\n\nVadivel D, Malaichamy I: Dataset 1 in: Pyrolytic formation and photoactivity of reactive oxygen species in a SiO2/carbon nanocomposite from kraft lignin. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16080.d218907"
}
|
[
{
"id": "38855",
"date": "05 Oct 2018",
"name": "Simone Lazzaroni",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis manuscript is focused on the application of supported stable peroxyl radicals for the photo-degradation of organic materials. In this work the authors systematically investigated the photochemistry of peroxyl radicals in carbon-supported silica (PCS) and then evaluated the effects of PCS on methylene blue photodegradation as a model for a generic organic effluent. The manuscript is clearly written, with few errors (e.g. “N. Chen et al.” and “Y. Chen et al.” refer to the same reference article). However, the authors have extracted some interesting data that well support the discussion and the appropriate conclusions. Furthermore, problems such as overpopulation and the lack of drinking water are unfortunately a plague that afflicts our entire planet. I encourage the authors to continue with their research, thus contributing to increase the impact of their study.",
"responses": []
},
{
"id": "46596",
"date": "11 Apr 2019",
"name": "Stefano Crespi",
"expertise": [
"I am currently working in photochemistry, photocatalysis, and the elucidation of reaction mechanisms in the ground and excited states."
],
"suggestion": "Approved",
"report": "Approved\n\nVadivel and Malaichamy report on the activity of stable peroxyl radical species generated on carbon-supported silica after pyrolysis of kraft lignin deposited onto SiO2. The synthetic method applied is crucial to generating carbon radicals on the surface of the catalyst. These species can react with oxygen, readily forming stable ROS on the catalyst itself.\nThe photocatalytic activity of these peroxidic reactants is tested against the photobleaching of methylene blue, providing a complete analysis of the results obtained.\nThe article herein presented has a structured scholarly presentation that is based on the authors’ previous work. The literature cited is coherent and adequate to the topic.\nAs minor comments, the authors should pay attention to the citation of some of the references, which got mixed up in the version presented; e.g. Y. Chen in the text (ref 16) is stated correctly, but is wrong in the reference section, where names and surnames are cited incorrectly.\nTo improve readability, please write “Methylene Blue” in full along with “MB” the first time it appears in the main body of the article, because the extended name is present only in the abstract.\nThe study is based on the experience and methodology that the authors have recently published on Kraft lignin and its pyrolysis. 
Being experts in the field, they have devised a carefully planned work in all its aspects, comprising the synthetic, photochemical and analytical parts.\nThe authors provide the reader with a schematic, yet very precise, method section. The authors’ attention in giving detailed information on the specifics of all the instruments used is highly appreciated, e.g. explicitly reporting the wavelength used for the irradiation (366 nm) when several reports give the reader only generic data.\nAll the data are repeated and checked three times. The average and the standard deviation are furnished in the text. The statistical interpretation of the data is adequate to the problem treated.\nThe very low deviation found in the measurements testifies to the reproducibility of the method, which is remarkable given the complex matrix analysed. All the data are accurately reported in a spreadsheet furnished as a supplementary file to the article.\nThe conclusions of the work are drawn in a schematic yet elegant way, summing up a nice work that is fully supported by the experimental evidence.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1574
|
https://f1000research.com/articles/7-1573/v1
|
28 Sep 18
|
{
"type": "Research Article",
"title": "Healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats: an in vivo study",
"authors": [
"Dina Rady",
"Rabab Mubarak",
"Rehab A. Abdel Moneim",
"Rabab Mubarak",
"Rehab A. Abdel Moneim"
],
"abstract": "Background: Various techniques for tissue engineering have been introduced to aid the regeneration of defective or lost bone tissue. The aim of this study was to compare the in vivo bone-forming potential of bone marrow mesenchymal stem cells (BM-MSCs) and platelet-rich fibrin (PRF) on induced bone defects in rats’ tibiae. Methods: In total, one defect of 3-mm diameter was created in each tibia of 36 Wistar male rats. There were two groups: group A, left tibia bone defects that received PRF; and group B, right tibia bone defects of the same animal that received BM-MSCs loaded on a chitosan scaffold. Subsequently, scanning electron microscopy/energy-dispersive X-ray (SEM/EDX) analysis was performed at 3 days, 10 days, and 3 weeks post‑implantation, following euthanasia (n=12). Results: The EDX analysis performed for each group and time point revealed a significant increase in the mean calcium and phosphorus weight percentage in the BM-MSC-treated group relative to the PRF-treated group at all time intervals (P < 0.05). Moreover, the mean calcium and phosphorus weight percentage increased with time since the surgical intervention in both the PRF-treated and BM-MSC-treated groups (P < 0.05). Conclusions: In the present study, both BM-MSCs and PRF were capable of healing osseous defects induced in a rat tibial model. Yet, BM-MSCs promoted more adequate healing, with higher mean calcium and phosphorus weight percentages than PRF at all time points, and showed greater integration into the surrounding tissues than PRF.",
"keywords": [
"bone regeneration",
"bone marrow derived mesenchymal stem cells",
"platelet rich fibrin."
],
"content": "Introduction\n\nSeveral biomaterials are used to treat bone deficiencies1. Autologous bone graft limitations are related to the harvesting process, including the quality and quantity of grafted bone and complications at the second surgical site, while allogenic bone grafts carry the risk of disease transmission and immunological rejection. Hence, there is considerable motivation for developing alternative solutions for bone regeneration2. The use of tissue engineering approaches has proven to be effective in inducing bone formation by applying mesenchymal stem cells (MSCs)3 or platelet-rich fibrin (PRF)4. The capacity of bone marrow mesenchymal stem cells (BM-MSCs) for bone repair has been well reported in vivo with promising results; BM-MSCs remain the most widely used source of osteogenic cells in bone tissue engineering studies5–7. MSCs are undifferentiated cells capable of replication8 that have the potential to differentiate along multiple cell lineages, giving rise to cells that form mesenchymal tissues, including bone, cartilage and muscle9. PRF is a second-generation platelet-rich biomaterial10. PRF is derived from a natural and slowly progressive polymerization process occurring during centrifugation, which increases the incorporation of circulating cytokines and growth factors in the fibrin mesh and prevents them from undergoing proteolysis11. In addition, the PRF fibrin matrix provides an optimal support for MSCs, which constitute the determining elements responsible for real therapeutic potential12,13. Platelets are active growth factor-secreting cells that initiate wound-healing, connective tissue healing and cell proliferation14. Therefore, PRF is considered an inexpensive autologous fibrin scaffold that can be prepared in approximately one minute, and hence incurs no cost for a membrane or bone graft15. In the present research, rats were used as they are easy to handle and less expensive. 
In addition, breeding cycles are substantially shorter, providing enough animals in a reasonable amount of time16. Research on bone tissue engineering is focused on the development of alternatives to autologous bone grafts for bone reconstruction. Although multiple stem cell-based products and biomaterials are currently being examined, comparative studies are rarely performed to evaluate the most appropriate approach in this context. The purpose of this study was to compare the regenerative capacity of bone marrow (BM)-MSCs and PRF implanted in surgically induced bone defects in rats’ tibiae.\n\n\nMethods\n\nThe study protocol was approved by the Research Ethics Committee of the Faculty of Dentistry, Cairo University (151031).\n\nA total of 36 male Wistar rats weighing 175–200 g and aged 12–14 weeks were used in this study. The animals were obtained from and housed in the animal house, Faculty of Medicine, Cairo University. The animals were randomly placed in separate cages under a controlled room temperature of 25±2°C with a 12/12 h light/dark cycle and were fed food and water ad libitum.\n\nBM-MSCs were isolated from the femurs of 6 Wistar donor rats (6-week-old males, 100±20 g). BM-MSC isolation and propagation took place over the 14 days before the experimental procedures, under aseptic conditions as previously described17. Briefly, bone marrow was harvested by flushing the femurs with Dulbecco’s modified Eagle’s medium (DMEM, GIBCO/BRL) supplemented with 10% fetal bovine serum (GIBCO/BRL). Cells were isolated with a density gradient [Ficoll/Paque (Pharmacia)] and cultured in culture medium supplemented with 1% penicillin-streptomycin (GIBCO/BRL) at 37°C in a humidified 5% CO2 incubator. When large colonies developed (80–90% confluence), cultures were washed twice with phosphate buffered saline (PBS) and cells were trypsinized with 0.25% trypsin in 1 mM EDTA (GIBCO/BRL) for 5 minutes at 37°C. 
After centrifugation (at 2400 rpm for 20 minutes), cells were re-suspended in serum-supplemented medium and incubated in a 50 cm2 Falcon culture flask. On day 14, the adherent colonies of cells were trypsinized and counted17. Culture confluence was monitored by inverted light microscope (Olympus, USA) with a digital camera (Nikon, Japan).\n\nSurface antigens CD90 and CD34 were detected by flow cytometry to allow identification of BM-MSCs, as follows. Following blocking in 0.5% BSA and 2% FBS in PBS, 100,000 cells were incubated in the dark at 4°C for 20 min with the following monoclonal antibodies: FITC CD90 (PN IM1839U; Beckman Coulter) and PE CD34 (PN IM1871U; Beckman Coulter, USA). Mouse isotype PE antibody (Beckman Coulter, USA) was used as a control (dilution of all antibodies, 1:1500). Cells were washed, suspended in 500 µl fluorescence activated cell sorting (FACS) buffer and analyzed using a Cytomics FC 500 flow cytometer (Beckman Coulter, USA) with CPX software version 2.2. BM-MSC osteogenic differentiation was induced with StemPro osteogenic induction medium: at the third passage, 1 × 103 cells per well were incubated for 7 days with 300 µl of osteogenic medium from the StemPro osteogenesis differentiation kit (Gibco, Life Technology). Differentiation was identified by alizarin red staining (Sigma-Aldrich) for 30 min at room temperature; the mineralized nodules were stained and monitored using an inverted light microscope (Olympus, USA) with a digital camera (Nikon, Japan). The results were presented by descriptive analysis.\n\nThe surgical approach, in the proximal–medial area of each tibia, was performed under general anaesthesia via intramuscular injection of 50–75 mg/kg ketamine chlorohydrate (Amoun CO) and 20 mg/kg body weight xylazine HCL (Xyla-Ject®, PhoenixTM, Pharmaceutical Inc.). 
While blood samples were being prepared, a 3-mm diameter bone defect was created using a round surgical bur3 under constant irrigation with saline solution in both tibiae of the same animal (split-body design) to avoid selection bias and neutralize any confounders that might affect the outcomes of both treatments. Experimental groups were standardized among all the animals: group A, the left tibia defect received a PRF clot immediately placed in the defect with sterile tweezers; group B, the right tibia of the same animal received BM-MSCs seeded on a chitosan scaffold, which was then implanted in the tibial bone defect using a sterile spatula. Both groups were randomly sub-divided according to time of euthanasia into three sub-groups (1, 2 and 3): at 3 days, 10 days and 3 weeks, respectively (n = 12) (Table 1). Postoperatively, periosteum flaps and skin were sutured. Anti-inflammatories and antibiotics were applied to the skin and injected for 3 days. Each animal received 10 mg/kg flumox IM (Eipico, Egypt) to avoid secondary bacterial infection, 10 mg/kg cataflam (Novartis, Egypt) to relieve postoperative pain, and a topical antibiotic spray, Bivatracin (Egyptian Company For Advanced Pharma, Egypt), to avoid local infection. The animals were euthanized by an intra-cardiac overdose of sodium thiopental (80 mg/kg).\n\nTo obtain a porous chitosan scaffold to deliver BM-MSCs into the defect, 1 g chitosan (Merck, Germany) was dissolved in 200 µl 0.2% M acetic acid, stored for 1 day at room temperature, poured into a 3-mm diameter stainless steel circular mould, stored in a deep freezer at −70°C for 5 days, then lyophilized for 3 days as follows. In the lyophiliser (Thermo Fisher Scientific), there were three phases of preparation. The first phase was the freezing phase, where the sample was exposed to −40°C in a vacuum for 10 min. The second phase was the warm-up vacuum pump phase, where the sample was exposed to −15°C in a vacuum for 20 min. 
The third was the main drying phase, in which the sample was exposed to 30°C in a vacuum for 3 days; after the 3 days, a blank porous chitosan scaffold was obtained18,19.\n\nPrior to cell seeding, the lyophilized scaffolds were immersed in absolute ethanol for sterilization. Hydration was accomplished by sequential immersion in serially diluted ethanol solutions of 70, 50, and 25%, for intervals of 30 min each. Scaffolds were finally equilibrated in PBS followed by standard culture medium (30 min; 3 times), and then placed in tissue culture plates ready to be seeded. BM-MSCs were seeded at a density of 2.5 × 10⁶ cells/scaffold under static conditions, by means of a cell suspension. The seeded scaffold was then placed in the defect to deliver the stem cells, and the defect was sutured closed.\n\nA total of 2 ml venous blood was drawn from the caudal vein of rats used in the experiment into a plain tube and immediately centrifuged at room temperature with a lab centrifuge (Electronic centrifuge 800, China) for 10 min at 3000 rpm20. In the middle of the tube, a fibrin clot formed between the supernatant acellular plasma and the lower red corpuscles. The PRF clot was detached using sterile tweezers and applied to the bone defect.\n\nTibiae were carefully dissected free from soft tissue; bone specimens of each group were sectioned using a disc in a low-speed handpiece under constant irrigation to include the entire defect sites. Specimens were placed in 2.5% buffered glutaraldehyde solution (pH 4.7) for 6 hours, then dehydrated in increasing concentrations of ethanol (50, 70, 85, 95 and 100%) for 10 minutes at each concentration. Finally, they were mounted on EM stubs and examined by SEM (Quanta 250 FEG; FEI/Thermo Fisher Scientific, USA; accelerating voltage 30 kV, magnification 14–1,000,000×, gun resolution 1 nm). EDX analysis was performed with an EDX unit (FEI Company, Netherlands) integrated into the Quanta FEG 250 SEM. 
EDX analysis of the bone surfaces was performed, and the elemental distribution of phosphorus and calcium (expressed as weight percentage) was determined. Composition scans were collected at randomly selected points on the bone surfaces of the defect using the backscattered electron mode. Data were obtained by calculating the mean of ten independent determinations21.\n\nOne-way analysis of variance (ANOVA) was used to compare different observation times within the same group; this was followed by Tukey’s post hoc test when the difference was found to be significant. A t-test was used to compare the two groups, using IBM SPSS 18.0 version 21 for Windows (SPSS Inc., Chicago, IL, USA). The significance level was set at p ≤ 0.05.\n\n\nResults\n\nAt 3 days, cells were rounded, non-adherent, isolated bone marrow cells (Figure 1a). By 7 days, adherent cells had started to acquire a fusiform, spindle-shaped, fibroblast-like morphology, with 30–40% confluence (Figure 1b). By 10 days, cells had become adherent, fusiform, spindle-shaped and fibroblast-like, with 50–70% confluence (Figure 1c). Cells were long and spindle-shaped, reaching the highest confluence (90%), at 14 days (Figure 1d). BM-MSCs were negative for CD34 and positive for CD90 (>98.7%) (Figure 2). BM-MSCs were positively stained with alizarin red (Figure 3).\n\n(a) Cells cultured for 3 days (x100). (b) Cells cultured for 7 days (x100). (c) Cells cultured for 10 days (x100). (d) Cells cultured for 14 days (x100).\n\n(a) Positive for CD90 (98.7%). (b) Negative for CD34 (0.04%).\n\nFibro-cellular tissue and traces of PRF material were seen in sub-group A1 (Figure 4). In sub-group A2, bone was actively forming along the margins; blood vessels were seen emerging and inserting close to the newly formed bone, and PRF remnants were observed in the defect centre (Figure 5). Sub-group A3 revealed a spongy-like pattern with abundant non-remodelled vascular spaces containing fibro-cellular tissue. 
In addition, bone formation extended beyond the perimeter of the original defect site compared to sub-group A2 (Figure 6). In sub-group B1, numerous vessels appeared, along with a disorganized architecture of newly formed bone (Figure 7). The bone in sub-group B2 was nearly restored, with new bone extending partly beyond the perimeter of the defect (Figure 8). Sub-group B3 revealed more organized bone architecture: well-oriented, thick and smooth interconnecting bone trabeculae with large central vascular spaces. Many blood vessels were noticed. The borders between the newly formed bone and the pre-existing old bone were almost fully remodelled; they were no longer detectable in most areas and could not be distinguished from the cortical surroundings (Figure 9). Raw EDX data are shown in Dataset 122.\n\n(A) x100; (B) x500.\n\n(A) x100; (B) x500.\n\nNote the borders between the newly formed bone and the cortical surroundings were no longer detectable (white arrow). (A) x100; (B) x500.\n\n(A) x100; (B) x500.\n\nNote the interface between new and old bone (white arrows). (A) x100; (B) x500.\n\nNote the more integrated interface between new and old bone. (A) x100; (B) x500.\n\nThere was a significant increase in the mean calcium and phosphorus weight percentage of group B relative to group A at all time intervals. 
Moreover, the mean calcium and phosphorus weight percentages increased over time in both groups (Figure 10 and Table 2–Table 5).\n\nTukey’s post hoc test: mean values with different superscript letters are significantly different from each other.\n\nMean values with different superscript letters are significantly different from each other.\n\n\nDiscussion\n\nBone regeneration using BM-MSCs has been well reported and standardized in many protocols. Donzelli et al. (2007) showed that adult rat bone marrow is a suitable source of MSCs that can easily be induced to differentiate into an osteogenic lineage, so these cells are thought to be promising candidate supporting cells for bone reconstruction23. Most in vitro and many in vivo studies have proposed that MSCs possess the ability to increase osteoinduction and osteogenesis24–27.\n\nIn the current study, both BM-MSCs and PRF promoted bone regeneration: the newly formed bone was almost fully remodelled and integrated into the surrounding old bone, with well-vascularized fibro-cellular tissue. In addition, evidence of osteogenesis was reflected by the presence of blood vessels. However, bone regenerative capacity was improved in defects treated with BM-MSCs compared to those treated with PRF; SEM-EDX analysis revealed a significant increase in the mean calcium and phosphorus weight percentage in the BM-MSCs group relative to the PRF group at all time intervals.\n\nRegarding PRF, the growth factors it releases are postulated to promote tissue regeneration, tissue vascularity, mitosis of MSCs and osteoblasts, and the rate of collagen formation, playing a key role in the rate and extent of bone formation28. This would help explain the significant increase in calcium and phosphorus weight percentage in the PRF group over the course of the experiment.\n\nAccordingly, elemental analysis showed markedly low calcium and phosphorus in sub-group A1, which increased gradually in sub-groups A2 and A3. 
This can be explained by the findings of a previous study on the pattern of growth factor release, in which PRF sustained long-term release of TGF-β1 and PDGF-AB that peaked at day 14, leading to increased mineralization, and then declined mildly, giving a delayed peak of release11.\n\nThe BM-MSC-treated group exhibited more organized bone architecture than the PRF-treated group: sub-group B3 exhibited well-oriented, thick and smooth interconnecting bone trabeculae filling the defect, whereas sub-group A3 revealed a spongy-like pattern with abundant non-remodelled vascular spaces containing fibro-cellular tissue. The proposed mechanism through which BM-MSCs enhance bone regeneration is via maturation into osteoblasts in vivo, or via an indirect pathway through paracrine effects on host stem or progenitor cells29. Notably, a significant increase in the mean calcium and phosphorus weight percentage was observed in the BM-MSC-treated group at all time intervals throughout the experiment when compared with the corresponding PRF group. In accordance with our findings, a previous SEM/EDX analysis of osteogenically differentiated MSCs seeded on a collagen scaffold demonstrated that calcium co-localized with phosphorus, with a gradual increase of both chemical elements observed from day 7 up to very high levels at day 2823. In conclusion, we confirmed that PRF yielded inferior bone formation compared with BM-MSCs after implantation in rat tibiae.\n\n\nData availability\n\nDataset 1. Raw data for EDX analysis and flow cytometry gating graphs for identification of BM-MSCs. Also included are raw SEM images. DOI: https://doi.org/10.5256/f1000research.15985.d21854822.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nKim TH, Kim SH, Sandor GK, et al.: Comparison of platelet-rich plasma (PRP), platelet-rich fibrin (PRF), and concentrated growth factor (CGF) in rabbit-skull defect healing. Arch Oral Biol. England; 2014; 59(5): 550–8. PubMed Abstract | Publisher Full Text\n\nBrennan MÁ, Renaud A, Amiaud J, et al.: Pre-clinical studies of bone regeneration with human bone marrow stromal cells and biphasic calcium phosphate. Stem Cell Res Ther. 2014; 5(5): 114. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim J, Lee J, Yun HS, et al.: Comparison of bone regeneration rate in flat and long bone defects: Calvarial and tibial bone. Tissue Eng Regen Med. 2013; 10(6): 336–40. Publisher Full Text\n\nBölükbaşı N, Yeniyol S, Tekkesin MS, et al.: The use of platelet-rich fibrin in combination with biphasic calcium phosphate in the treatment of bone defects: a histologic and histomorphometric study. Curr Ther Res Clin Exp. 2013; 75: 15–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLivingston TL, Gordon S, Archambault M, et al.: Mesenchymal stem cells combined with biphasic calcium phosphate ceramics promote bone regeneration. J Mater Sci Mater Med. 2003; 14(3): 211–8. PubMed Abstract | Publisher Full Text\n\nMankani MH, Kuznetsov SA, Robey PG: Formation of hematopoietic territories and bone by transplanted human bone marrow stromal cells requires a critical cell density. Exp Hematol. 2007; 35(6): 995–1004. PubMed Abstract | Publisher Full Text\n\nFang D, Seo BM, Liu Y, et al.: Transplantation of mesenchymal stem cells is an optimal approach for plastic surgery. Stem Cells. 2007; 25(4): 1021–8. PubMed Abstract | Publisher Full Text\n\nMoraleda JM, Blanquer M, Bleda P, et al.: Adult stem cell therapy: dream or reality? Transpl Immunol. Netherlands; 2006; 17(1): 74–7. 
PubMed Abstract | Publisher Full Text\n\nRen G, Chen X, Dong F, et al.: Concise review: mesenchymal stem cells and translational medicine: emerging issues. Stem Cells Transl Med. Wiley-Blackwell; 2012; [cited 2018 Aug 4]; 1(1): 51–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDohan DM, Choukroun J, Diss A, et al.: Platelet-rich fibrin (PRF): a second-generation platelet concentrate. Part I: technological concepts and evolution. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. United States; 2006; 101(3): e37–44. PubMed Abstract | Publisher Full Text\n\nHe L, Lin Y, Hu X, et al.: A comparative study of platelet-rich fibrin (PRF) and platelet-rich plasma (PRP) on the effect of proliferation and differentiation of rat osteoblasts in vitro. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2009; 108(5): 707–13. PubMed Abstract | Publisher Full Text\n\nKang YH, Jeon SH, Park JY, et al.: Platelet-rich fibrin is a Bioscaffold and reservoir of growth factors for tissue regeneration. Tissue Eng Part A. 2011; 17(3–4): 349–59. PubMed Abstract | Publisher Full Text\n\nDohan Ehrenfest DM, Doglioli P, de Peppo GM, et al.: Choukroun’s platelet-rich fibrin (PRF) stimulates in vitro proliferation and differentiation of human oral bone mesenchymal stem cell in a dose-dependent way. Arch Oral Biol. Elsevier Ltd; 2010; 55(3): 185–94. PubMed Abstract | Publisher Full Text\n\nLaurens N, Koolwijk P, de Maat MP: Fibrin structure and wound healing. J Thromb Haemost. England; 2006; 4(5): 932–9. PubMed Abstract | Publisher Full Text\n\nThorat M, Pradeep AR, Pallavi B: Clinical effect of autologous platelet-rich fibrin in the treatment of intra‐bony defects: a controlled clinical trial. J Clin Periodontol. 2011; 38(10): 925–32. PubMed Abstract | Publisher Full Text\n\nHisting T, Garcia P, Holstein JH, et al.: Small animal bone healing models: standards, tips, and pitfalls results of a consensus meeting. Bone. 2011; 49(4): 591–9. 
PubMed Abstract | Publisher Full Text\n\nAlhadlaq A, Mao JJ: Mesenchymal Stem Cells: Isolation and Therapeutics. Stem Cells Dev. Mary Ann Liebert, Inc. 2 Madison Avenue Larchmont, NY 10538 USA; 2004; 13(4): 436–48. PubMed Abstract | Publisher Full Text\n\nCosta-Pinto AR, Reis RL, Neves NM: Scaffolds based bone tissue engineering: the role of chitosan. Tissue Eng Part B Rev. United States; 2011; 17(5): 331–47. PubMed Abstract | Publisher Full Text\n\nGarg T, Chanana A, Joshi R: Preparation of Chitosan Scaffolds for Tissue Engineering using Freeze drying Technology. IOSR J Pharm. 2012; 2(1): 72–3. Publisher Full Text\n\nŞenses F, Önder ME, Koçyiğit ID, et al.: Effect of Platelet-Rich Fibrin on Peripheral Nerve Regeneration. J Craniofac Surg. 2016; 27(7): 1759–64. PubMed Abstract | Publisher Full Text\n\nPanduric DG, Juric IB, Music S, et al.: Morphological and ultrastructural comparative analysis of bone tissue after Er:YAG laser and surgical drill osteotomy. Photomed Laser Surg. United States; 2014; 32(7): 401–8. PubMed Abstract | Publisher Full Text\n\nRadi D, Mubarak R, Abdel Moneim R: Dataset 1 in: Healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats: an in vivo study. F1000Research. 2018. https://www.doi.org/10.5256/f1000research.15985.d218548\n\nDonzelli E, Salvade A, Mimo P, et al.: Mesenchymal stem cells cultured on a collagen scaffold: In vitro osteogenic differentiation. Arch Oral Biol. 2007; 52(1): 64–73. PubMed Abstract | Publisher Full Text\n\nLi H, Dai K, Tang T, et al.: Bone regeneration by implantation of adipose-derived stromal cells expressing BMP-2. Biochem Biophys Res Commun. 2007; 356(4): 836–42. PubMed Abstract | Publisher Full Text\n\nOtsuru S, Tamai K, Yamazaki T, et al.: Bone marrow-derived osteoblast progenitor cells in circulating blood contribute to ectopic bone formation in mice. Biochem Biophys Res Commun. 2007; 354(2): 453–8. 
PubMed Abstract | Publisher Full Text\n\nHayashi O, Katsube Y, Hirose M, et al.: Comparison of osteogenic ability of rat mesenchymal stem cells from bone marrow, periosteum, and adipose tissue. Calcif Tissue Int. 2008; 82(3): 238–47. PubMed Abstract | Publisher Full Text\n\nde Girolamo L, Lucarelli E, Alessandri G, et al.: Mesenchymal stem/stromal cells: a new ''cells as drugs'' paradigm. Efficacy and critical aspects in cell therapy. Curr Pharm Des. 2013; 19(13): 2459–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnilkumar K, Geetha A, Umasudhakar, et al.: Platelet-rich-fibrin: A novel root coverage approach. J Indian Soc Periodontol. 2009; 13(1): 50–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZachos T, Diggs A, Weisbrode S, et al.: Mesenchymal stem cell-mediated gene delivery of bone morphogenetic protein-2 in an articular fracture model. Mol Ther. 2007; 15(8): 1543–50. PubMed Abstract | Publisher Full Text"
}
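The statistical workflow described in the article's Methods (one-way ANOVA across observation times within a group, Tukey's post hoc test when significant, and a t-test between the two treatment groups) can be sketched in Python with SciPy. This is an illustrative sketch only: the calcium weight-percentage values below are hypothetical placeholders, not the study's measurements, and the Tukey step is indicated in a comment rather than executed.

```python
# Illustrative sketch of the analysis plan: one-way ANOVA across the three
# observation times within one group, then an unpaired t-test comparing the
# two treatment groups at a single time point. All values are hypothetical.
from scipy import stats

# Hypothetical calcium weight % in one group at 3 days, 10 days, 3 weeks
day3 = [18.1, 17.6, 18.9, 17.9]
day10 = [20.2, 21.0, 19.8, 20.5]
week3 = [23.4, 22.8, 24.1, 23.0]

f_stat, p_time = stats.f_oneway(day3, day10, week3)
# If p_time <= 0.05, Tukey's HSD pairwise comparisons would follow
# (e.g. statsmodels' pairwise_tukeyhsd) to locate which times differ.

# Hypothetical group A (PRF) vs group B (BM-MSCs) values at 3 weeks
group_a = [20.1, 19.5, 20.8, 19.9]
group_b = [23.4, 22.8, 24.1, 23.0]
t_stat, p_group = stats.ttest_ind(group_a, group_b)

print(f"ANOVA across times: p = {p_time:.4g}")
print(f"t-test A vs B: p = {p_group:.4g}")
```

With a real dataset, the same calls would be applied to each element (Ca, P) and each group, with significance judged against the article's threshold of p ≤ 0.05.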
|
[
{
"id": "38853",
"date": "01 Oct 2018",
"name": "Mahmoud M. Al-Ankily",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis report by Rady et al., examines the healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats. the authors' inclusion of PRF yielded inferior bone formation to that by BM-MSCs after implantation in rat tibiae. The study, although it may be small, adds knowledge to the existing literature. Suggested minor comments would help to improve the impact of this paper:\nMethods\nIsolation, culture and identification of BM-MSCs:(aseptic conditions as previously described) No previously described information found. Establishment of bone defects: what is the depth of the defect? Did it reach the bone marrow spaces? Preparation of PRF: taking 2ml of blood of rats used in the experiment may lead to death of the animal or affecting its healing capacity. it was more safe to use a donor like preparation of BM-MSCs. It was better to compare your results with a control group to evaluate the normal healing capacity with other groups.\nResults\n\nSEM/EDX analysis: Figures (B) x500 is not so clear please do not minimize them to save the magnification benefits.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38852",
"date": "01 Oct 2018",
"name": "Mohamed Shamel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFirst of all, I would like to congratulate the authors for attempting to undertake this project which I found very interesting and of valuable additional knowledge. The manuscript itself is well-written and well-structured. The authors of this study have investigated the Healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats. Based on the study results, the authors have concluded that BM-MSCs promoted more adequate healing, with higher mean calcium and phosphorous weight percentages than PRF at all-time points, and showed greater integration into the surrounding tissues than PRF. However, the authors need to address the following minor remarks:\nMethods: The auhtors mentioned \"experimental procedures under aseptic conditions as previously described\", however I found no previously described information.\nResults: 1. In vitro evaluation of BM-MSCs: It is not clear in the text the shape of cells which were found belongs to which group, 3 days or 7 days?\n2. Alizarin red staining was used however no sufficient information was provided for the benefits of using this stain.\nDiscussion: Discussion of the results is quite comprehensive. 
In analyzing the results, the authors also show citations from the previous study to support the explanation of these results.\nConclusions: I think the authors should have added more points to conclude the hard work they have done.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38851",
"date": "02 Oct 2018",
"name": "Reham Magdy Amin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe Idea of research article\n\n\"Healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats: an in vivo study\" by Rady et al., is interesting and valuable as it is comparing two different ways for enhancing bone defect healing in vivo which could be a base for clinical application .\nOverall, the paper is clear, substantially easy to read and well constructed but still, there are suggested minor comments the authors could deal with, or at least discuss for additional impact.\n\nMethods\nIt is worthy mentioning the number of rats per cage Further details for clarifying the methodology and aseptic conditions of BM-MSCs isolation from the femurs of donor rats will be a valuable add. It is recommended to mention how the depth of bone defect was controlled to be standard in all experiment induced defects . It is an add to mention the place of the lab where the BM-MSCs isolation procedures were done. You need to give further details about the size and methodology of BM-MScs and PRF pellet loading in the bone defect\n\nResults\nFig. 3 need show monitoring of the positively stained calcified nodules as mentioned in the methodology Recommendations would be worth mentioning depending on the paper results\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38850",
"date": "04 Oct 2018",
"name": "Mahmoud M. Bakr",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOverall the study was well performed and showed some interesting results. After looking at referees' reports which summarized some revisions required for this paper, I would like to add the following:\nA statement needs to be added in the methods outlining the power analysis performed to determine the sample size in each group. Despite the localized effect of BM-MSCs and PRF being applied on a scaffold, the authors should add in their discussion the potential systemic effect of both treatments especially that the choice was made to have both defects created in the same animal (Right and left tibis with different treatments). I am guessing that this was done to reduce the number of animals used in the experiment. However, the systemic effect of treatments should be addressed as a limitation of the study and/or a discussion should made to clarify how the systemic effect could not have contributed to the significant results. Statistical analysis: A two ANOVA analysis should be more approriate to use in order to investigate the single main effects of time and treatment as well as the possible interaction between treatment and time. This will add more strength to the results rather than using post hoc results and t-tests.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1573
|
https://f1000research.com/articles/7-1568/v1
|
28 Sep 18
|
{
"type": "Research Article",
"title": "Burden of cytomegalovirus reactivation post kidney transplant with antithymocyte globulin use in Thailand: A retrospective cohort study",
"authors": [
"Maria N. Chitasombat",
"Siriorn P. Watcharananan",
"Siriorn P. Watcharananan"
],
"abstract": "Background: Cytomegalovirus (CMV) is an important cause of infectious complications after kidney transplantation (KT), especially among patients receiving antithymocyte globulin (ATG). CMV infection can result in organ dysfunction and indirect effects such as graft rejection, graft failure, and opportunistic infections. Prevention of CMV reactivation includes pre-emptive or prophylactic approaches. Access to valganciclovir prophylaxis is limited by high cost. Our objective is to determine the burden and cost of treatment for CMV reactivation/disease among KT recipients who received ATG in Thailand since its first use in our center. Methods: We conducted a single-center retrospective cohort study of KT patients who received ATG during 2010-2013. We reviewed patients’ characteristics, type of CMV prophylaxis, incidence of CMV reactivation, and outcome (co-infections, graft function and death). We compared the treatment cost between patients with and without CMV reactivation. Results: Thirty patients included in the study had CMV serostatus D+/R+. Twenty-nine patients received intravenous ganciclovir early after KT as inpatients. Only three received outpatient valganciclovir prophylaxis. Incidence of CMV reactivation was 43%, with a median onset of 91 (range 23-1007) days after KT. Three patients had CMV end-organ disease; enterocolitis or retinitis. Infectious complication rate among ATG-treated KT patients was up to 83%, with a trend toward a higher rate among those with CMV reactivation (P = 0.087). Patients with CMV reactivation/disease required longer duration of hospitalization (P = 0.018). The rate of graft loss was 17%. The survival rate was 97%. The cost of treatment among patients with CMV reactivation was significantly higher for both inpatient setting (P = 0.021) and total cost (P = 0.035) than in those without CMV reactivation. Conclusions: Burden of infectious complications among ATG-treated KT patients was high. 
CMV reactivation is common and associated with longer duration of hospitalization and higher cost.",
"keywords": [
"antithymocyte globulin",
"cytomegalovirus",
"burden",
"kidney transplantation"
],
"content": "Introduction\n\nHuman cytomegalovirus (CMV) is an important cause of infectious complications after kidney transplantation (KT)1. CMV infection can result in end-organ diseases and indirect effects such as opportunistic infections, graft rejection, and graft failure2. Since the introduction of antithymocyte globulin (ATG) in transplantation, the incidence of CMV reactivation has increased up to 10–50%3–9. To prevent CMV reactivation, prophylactic and pre-emptive approaches are almost equally effective. CMV prophylaxis reduces the incidence of CMV disease and associated mortality in solid organ transplant recipients10. However, the cost of prophylaxis is high. Data from our center prior to ATG use showed that the incidence of symptomatic CMV reactivation among KT recipients (CMV D+/R+), was low (4.6%)11. In recent years, the use of ATG has been implemented nationwide.\n\nIn this study performed in Thailand, we evaluated the burden of symptomatic CMV reactivation/disease following the use of ATG in situations where CMV prophylaxis was not widely available and affordable. We also evaluated the outcome of ATG-treated patients in terms of infectious complications, graft loss, and cost of treatment in patients who developed CMV reactivation/disease and those who did not.\n\n\nMethods\n\nThis was a retrospective cohort study of all ATG-treated (induction/antirejection therapy) KT patients aged ≥15 years at Ramathibodi Hospital, Bangkok, Thailand between January 2010 and July 2013. At our institution, routine oral antimicrobial prophylaxis included 1 year acyclovir (withheld during the period of anti-CMV exposure), 9 months isoniazid and 1 year cotrimoxazole. The strategy for CMV prophylaxis or pre-emptive therapy was based on the physician’s decision. Blood CMV viral load was monitored.\n\nThis study was approved by the Institutional Ethics Committee of Ramathibodi Hospital, Mahidol University (#12-56-24). 
For this retrospective study, formal informed consent was not required by the committee.\n\nWe collected data from the records of patients on: demographic characteristics; underlying disease; type of KT; details of induction regimen and maintenance immunosuppression; serum creatinine; CMV serostatus of donors and recipients; CMV prophylaxis; clinical course; post-KT infectious complications; graft rejection; laboratory parameters at the time of CMV reactivation/disease (complete blood count, chemistry, liver function tests, immunosuppressive drug level, plasma CMV viral load) (COBAS Amplicor Monitor test; Roche Molecular Diagnostics); and treatment for CMV reactivation/disease. Outcomes, including infectious complications, graft rejection, and death, were measured at 3 and 6 months after KT and until the end of the study in January 2014.\n\nThe cost of transplantation was analyzed among 26 KT patients (excluding four with missing data). We collected data for ganciclovir/valganciclovir use (duration and dosage) and medical expenses (overall cost of hospitalization and treatment, outpatient visits, emergency room visits, medication, laboratory tests, and imaging) from the initial hospital admission for KT until 6 months and the end of the study in January 2014. The direct cost of treatment for CMV infection/disease was not available because there was no system in the hospital to extract the specific data. We calculated the cost in US$ (2014 conversion rate of 32.506 THB to 1 US$) of prophylaxis with valganciclovir, with dose adjustment for glomerular filtration rate (GFR) for each patient according to serum creatinine at discharge.\n\nThe definition of CMV reactivation/disease was based on that of Ljungman et al.2. CMV reactivation was defined as new detection of CMV infection (plasma CMV viral load was used in this study) in patients who were previously CMV-seropositive (R+). 
CMV gastrointestinal disease was defined by combination of gastrointestinal symptoms, endoscopic mucosal lesions, and demonstration of CMV infection by histopathological examination, immunohistochemical analysis, or in situ hybridization of gastrointestinal tract biopsy specimens. CMV retinitis was diagnosed by an ophthalmologist from examination of typical lesions.\n\nData were presented as median (range) and number (%). Categorical variables among patient groups were compared using the χ2 or Fisher’s exact test, and continuous variables were compared using the Mann–Whitney U test. Statistical analyses were performed by SPSS software version 17.0 (IBM SPSS Statistics, Chicago, Illinois, USA).\n\n\nResults\n\nA total of 30 KT patients received ATG during the study period. Patients’ characteristics are shown in Table 1. The majority of patients (n = 26; 87%) resided in rural areas. Six (20%) had a second KT, and 16 (53%) had living donor KT. ATG was used for induction therapy in 23 (77%) patients and antirejection therapy in seven. The total median ATG dose was 225 (105-700) mg. The maintenance regimen included mycophenolate mofetil, tacrolimus and prednisolone (n = 22, 73.3%); mycophenolate mofetil, cyclosporine and prednisolone (n = 4, 13.3%); cyclosporine, everolimus and prednisolone (n = 2, 6.6%); sirolimus, mycophenolate mofetil and prednisolone (n = 1, 3.3%); and everolimus, mycophenolate mofetil and prednisolone (n = 1, 3.3%). Delayed graft function occurred in 13 (43.3%) patients. Inpatient post-KT CMV prophylaxis with intravenous ganciclovir was given to 29 (96.6%) patients for a median duration of 13 (2-55) days. The median duration of hospitalization post-KT was 28 (16-78) days. Upon discharge, 16 (53%) patients had impaired graft function [GFR 40-59 ml/min in six (20%) patients, and 25-39 ml/min in five (17%) and 10-24 ml/min in five]. Two patients required hemodialysis at discharge because of early graft loss from severe antibody-mediated rejection. 
Outpatient CMV prophylaxis with valganciclovir was given to three (10%) patients. Rejection was diagnosed in 13 (43%) patients, but only 10 (76.9%) cases were confirmed by kidney biopsy. The median time to diagnosis of rejection was 13 (1–266) days. The types of rejection in these 10 patients included antibody-mediated rejection (80%), cellular rejection (10%), and combined antibody and cellular rejection (10%). The details of antirejection therapy are described in Table 1.\n\n†Renal calculi and polycystic kidney disease; ‡ATG and IL-2 antagonist (n=1), IL-2 antagonist, rituximab and bortezomib (n=1); §sirolimus (n = 1, 3%; 1 mg/day), everolimus (n = 3, 10%; 3 (2–4) mg/day); ||pulse methylprednisolone and ATG (1, 8%), pulse methylprednisolone, ATG, IVIG and plasmapheresis (3, 23%), ATG and plasmapheresis (1, 8%), ATG, IVIG and plasmapheresis (1, 8%). D+, donor CMV seropositive; R+, recipient CMV seropositive; HLA, human leukocyte antigen; PRA, panel reactive antibody; IVIG, intravenous immunoglobulin; Cr, creatinine; KT, kidney transplantation.\n\nThe median duration of follow-up was 542 (134–1583) days after KT. None of the patients who received valganciclovir prophylaxis developed CMV reactivation/disease. Thirteen patients developed CMV reactivation/disease. Six (46%) had low-grade CMV viremia without end-organ disease that resolved spontaneously after reduced immunosuppression. Four patients had CMV viremia plus fever, leukopenia and thrombocytopenia, compatible with CMV syndrome. Three patients had CMV end-organ disease: two with gastrointestinal disease and one with retinitis. The median onset of CMV reactivation/disease was 91 (23–1007) days after KT. Seven patients required anti-CMV therapy for a median duration of 25 (2–75) days, including intravenous ganciclovir for 12 (2–56) days. 
Laboratory parameters at the time of CMV reactivation/disease were: median white blood cell count 6,585 (3,082–9,962) cells/mm3; 11 patients (85%) had lymphopenia, with a median absolute lymphocyte count of 519 (322–1,252) cells/mm3; and median serum creatinine was 1.37 (0.65–6.95) mg/dl. Infectious complications occurred in 25 (83%) patients (Table 2). Pneumocystis jirovecii pneumonia occurred in four patients who did not receive cotrimoxazole at the time of diagnosis. Only one patient (with ABO-incompatible KT) died, at 266 days after KT, because of several infectious complications (Pseudomonas aeruginosa septicemia, P. jirovecii pneumonia, invasive pulmonary aspergillosis, and disseminated Mycobacterium abscessus infection). Patient outcomes are shown in Table 1. ATG-treated KT patients with CMV reactivation/disease required a longer duration of hospitalization after KT, with a median of 40 (21–78) days compared with 26 (16–61) days for patients without CMV reactivation (P = 0.018).\n\nNo CMV, patients with no evidence of CMV reactivation/disease; CMV, patients with CMV reactivation/disease. ¶Candida urinary tract infection (n = 3), candidemia (n = 2), invasive pulmonary aspergillosis (n=1), and disseminated histoplasmosis (n = 1). †BK-virus-associated nephropathy (n = 1), parvovirus-B19-associated pure red cell aplasia (n = 1), disseminated varicella zoster infection (n = 1), and rhinovirus lower respiratory tract infection (n = 1). ‡Disseminated Mycobacterium tuberculosis infection (n = 1), Mycobacterium haemophilum soft tissue infection (n = 1), disseminated Mycobacterium abscessus infection (n = 1). PJP, Pneumocystis jirovecii pneumonia.\n\nThe cost of KT was analyzed among 26 patients (excluding four with missing data) (Table 3). The costs of 100-day inpatient post-KT care, total inpatient post-KT care, and total post-KT care were significantly higher among patients with CMV reactivation/disease (P < 0.05). 
The cost of valganciclovir for patients with normal GFR (900 mg/day) for 100 and 180 days was US$ 7,900 and US$ 14,220, respectively. The calculated median cost of valganciclovir prophylaxis, with the dose adjusted to each patient's GFR at discharge, was US$ 2,716 (range US$ 210–6,336) for 100 days and US$ 5,431 (range US$ 420–12,673) for 200 days.\n\nKT, kidney transplantation; P value calculated by Mann–Whitney U test.\n\n\nDiscussion\n\nThere is a lack of data on the burden of CMV reactivation/disease among KT recipients with CMV D+/R+ serostatus treated with ATG in Thailand. In Thailand, CMV prophylaxis is not widely available because of the high cost of valganciclovir. Pre-emptive treatment with plasma CMV viral load monitoring is difficult to achieve because of the need for frequent visits to the transplantation center. Our study is believed to be the first in Thailand to assess the burden of CMV reactivation after KT with ATG treatment. The incidence of CMV reactivation among CMV D+/R+ patients in our study was higher than in a previous study from our center prior to the use of ATG (53% vs. 16.5%)11, and similar to that in studies from Kuwait (43%) and Germany (53.8%)12,13. CMV reactivation is known to have immunomodulatory effects in transplant patients, resulting in allograft dysfunction and other infectious complications2. The overall rate of opportunistic infection was high in ATG-treated patients, and there was a trend toward higher co-infection rates among patients with CMV reactivation/disease. The rate of graft rejection/loss was not increased among patients with CMV reactivation; however, the lack of statistical power resulting from the small number of patients makes it impossible to draw firm conclusions.\n\nFinancial burden is a major problem in resource-limited settings. We performed a preliminary analysis of outcome in terms of cost, differentiating between patients with and without CMV reactivation/disease. 
We could not perform a full health economic analysis because of the small sample size. We demonstrated that the cost of KT was higher among patients with CMV reactivation/disease as a result of longer hospitalization, which was partly related to treatment of infectious complications. The cost within 100 days after KT was high (US$ 18,667) compared with that in a study from Chile (US$ 11,186)14, possibly because of the use of high-cost treatments in our study, including ATG, intravenous immunoglobulin, rituximab, and plasmapheresis. A study of similar patients from the US also showed a high cost, up to US$ 49,000 per admission15. In a study from Australia, the cost at 12 months after KT was AU$ 89,188, AU$ 85,227, and AU$ 88,860 for no induction, induction with anti-interleukin-2, and induction with ATG, respectively16. Most studies have focused on the cost of induction/rejection therapy and have not included post-KT costs such as the treatment of infectious complications.\n\nOne analysis has shown that universal CMV prophylaxis is cost-effective in KT patients with CMV R+ serostatus17. The lack of CMV prophylaxis could lead to even higher costs because of the cost of hospitalization among CMV D+/R– patients18. A study from China revealed that the cost of KT was mainly related to drug treatment rather than hospitalization, which differs from the situation in western countries19. In our study, universal valganciclovir prophylaxis was not given to KT patients who received ATG induction because of the high cost of the medication. The alternative pre-emptive approach, with monitoring of CMV viral load in R+ patients, has been described as an effective strategy18. 
However, in our center an adequate pre-emptive approach was not possible because of the need for frequent follow-up visits, which were not feasible for most patients, who resided in rural areas.\n\nAn economic model that simulated long-term costs and outcomes of prolonged prophylaxis with valganciclovir in a cohort of 10,000 D+/R– KT patients revealed that 200-day prophylaxis was more cost-effective than a 100-day regimen, with drug cost estimated based on normal GFR20. In our study, most patients had impaired graft function (low GFR) as a result of the extended use of deceased donors21. We calculated that the cost of up to 100 days of valganciclovir prophylaxis, adjusted for the low GFR at discharge, was substantially less than the cost estimated with normal GFR. Our study demonstrated that the cost after KT among patients with CMV reactivation/disease was significantly higher than in those without CMV reactivation/disease.\n\nThe limitations of our study included the small sample size of a single-center retrospective study, and no formal cost-effectiveness or sensitivity analysis was intended. The heterogeneity of our patients was high, which limited the economic conclusions of this study. Other potential sources of bias were the use of rituximab in a few patients and the variable duration of follow-up after KT. Generalization of our data should be done with caution, depending on the institutional protocol for KT. 
Future CMV prophylaxis or pre-emptive treatment in resource-limited settings should be evaluated in a prospective study with a larger sample size.\n\n\nConclusions\n\nOur study highlighted the burden of CMV reactivation/disease and opportunistic infections in ATG-treated KT patients in a developing country where routine CMV prophylaxis may not be affordable.\n\n\nData availability\n\nDataset 1: Raw data for the study ‘Burden of cytomegalovirus reactivation post kidney transplant with antithymocyte globulin use in Thailand: A retrospective cohort study’, 10.5256/f1000research.16321.d21902822",
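The prophylaxis cost figures reported in the Results follow from simple pro-rating of the reported 100-day price and the stated THB-to-US$ conversion rate. As an illustrative aside, the sketch below reproduces that arithmetic; it is not part of the authors' analysis (which was performed in SPSS), and the helper names are ours.

```python
# Illustrative sketch of the cost arithmetic reported in the study.
# Figures taken from the text: valganciclovir at normal GFR (900 mg/day)
# costs US$ 7,900 per 100 days, and THB converts at the 2014 rate of
# 32.506 THB per 1 US$. Function names are hypothetical, for illustration.

THB_PER_USD = 32.506            # 2014 conversion rate used in the study
COST_PER_100_DAYS_USD = 7_900   # reported normal-GFR valganciclovir cost

def thb_to_usd(amount_thb: float) -> float:
    """Convert a Thai-baht expense to US dollars at the study's 2014 rate."""
    return amount_thb / THB_PER_USD

def prophylaxis_cost_usd(days: int) -> float:
    """Pro-rate the normal-GFR valganciclovir cost over a given duration."""
    return COST_PER_100_DAYS_USD / 100 * days

print(prophylaxis_cost_usd(100))  # 7900.0
print(prophylaxis_cost_usd(180))  # 14220.0  (matches the reported US$ 14,220)
```

The GFR-adjusted medians reported in the study (US$ 2,716 for 100 days, US$ 5,431 for 200 days) cannot be reproduced this way, since they depend on each patient's individual dose adjustment.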
"appendix": "Grant information\n\nThe author(s) declared that there were no grants involved in supporting this work.\n\n\nAcknowledgments\n\nWe thank Atiporn Ingsathit MD., PhD. for her suggestions of the statistical analysis. The data reported here were presented as a poster presentation at the IDWeek 2014, 8–12 October 2014, Philadelphia, USA. We thank Cathel Kerr, PhD, from Edanz Group for editing a draft of this manuscript.\n\n\nReferences\n\nDe Keyzer K, Van Laecke S, Peeters P, et al.: Human cytomegalovirus and kidney transplantation: a clinician's update. Am J Kidney Dis. 2011; 58(1): 118–26. PubMed Abstract | Publisher Full Text\n\nLjungman P, Griffiths P, Paya C: Definitions of cytomegalovirus infection and disease in transplant recipients. Clin Infect Dis. 2002; 34(8): 1094–7. PubMed Abstract | Publisher Full Text\n\nLebranchu Y, Bridoux F, Büchler M, et al.: Immunoprophylaxis with basiliximab compared with antithymocyte globulin in renal transplant patients receiving MMF-containing triple therapy. Am J Transplant. 2002; 2(1): 48–56. PubMed Abstract | Publisher Full Text\n\nMourad G, Garrigue V, Squifflet JP, et al.: Induction versus noninduction in renal transplant recipients with tacrolimus-based immunosuppression. Transplantation. 2001; 72(6): 1050–5. PubMed Abstract | Publisher Full Text\n\nJamil B, Nicholls KM, Becker GJ, et al.: Influence of anti-rejection therapy on the timing of cytomegalovirus disease and other infections in renal transplant recipients. Clin Transplant. 2000; 14(1): 14–8. PubMed Abstract | Publisher Full Text\n\nZamora MR: Controversies in lung transplantation: management of cytomegalovirus infections. J Heart Lung Transplant. 2002; 21(8): 841–9. PubMed Abstract | Publisher Full Text\n\nHuurman VA, Kalpoe JS, van de Linde P, et al.: Choice of antibody immunotherapy influences cytomegalovirus viremia in simultaneous pancreas-kidney transplant recipients. Diabetes Care. 2006; 29(4): 842–7. 
PubMed Abstract | Publisher Full Text\n\nOzaki KS, Pestana JO, Granato CF, et al.: Sequential cytomegalovirus antigenemia monitoring in kidney transplant patients treated with antilymphocyte antibodies. Transpl Infect Dis. 2004; 6(2): 63–8. PubMed Abstract | Publisher Full Text\n\nBüchler M, Hurault de Ligny B, Madec C, et al.: Induction therapy by anti-thymocyte globulin (rabbit) in renal transplantation: a 1-yr follow-up of safety and efficacy. Clin Transplant. 2003; 17(6): 539–45. PubMed Abstract | Publisher Full Text\n\nHodson EM, Ladhani M, Webster AC, et al.: Antiviral medications for preventing cytomegalovirus disease in solid organ transplant recipients. Cochrane Database Syst Rev. 2013; (2): CD003774. PubMed Abstract | Publisher Full Text\n\nWatcharananan SP, Louhapanswat S, Chantratita W, et al.: Cytomegalovirus viremia after kidney transplantation in Thailand: predictors of symptomatic infection and outcome. Transplant Proc. 2012; 44(3): 701–5. PubMed Abstract | Publisher Full Text\n\nSaid T, Nampoory MR, Johny KV, et al.: Cytomegalovirus prophylaxis with ganciclovir in kidney transplant recipients receiving induction antilymphocyte antibodies. Transplant Proc. 2004; 36(6): 1847–9. PubMed Abstract | Publisher Full Text\n\nWitzke O, Hauser IA, Bartels M, et al.: Valganciclovir prophylaxis versus preemptive therapy in cytomegalovirus-positive renal allograft recipients: 1-year results of a randomized clinical trial. Transplantation. 2012; 93(1): 61–8. PubMed Abstract | Publisher Full Text\n\nDominguez J, Harrison R, Atal R: Cost-benefit estimation of cadaveric kidney transplantation: the case of a developing country. Transplant Proc. 2011; 43(6): 2300–4. PubMed Abstract | Publisher Full Text\n\nTanriover B, Wright SE, Foster SV, et al.: High-dose intravenous immunoglobulin and rituximab treatment for antibody-mediated rejection after kidney transplantation: a cost analysis. Transplant Proc. 2008; 40(10): 3393–6. 
PubMed Abstract | Publisher Full Text\n\nMorton RL, Howard K, Webster AC, et al.: The cost-effectiveness of induction immunosuppression in kidney transplantation. Nephrol Dial Transplant. 2009; 24(7): 2258–69. PubMed Abstract | Publisher Full Text\n\nLuan FL, Kommareddi M, Ojo AO: Universal prophylaxis is cost effective in cytomegalovirus serology-positive kidney transplant patients. Transplantation. 2011; 91(2): 237–44. PubMed Abstract | Publisher Full Text\n\nHellemans R, Beutels P, Ieven M, et al.: Cost analysis in favor of a combined approach for cytomegalovirus after kidney transplantation: a single-center experience. Transpl Infect Dis. 2013; 15(1): 70–8. PubMed Abstract | Publisher Full Text\n\nZhao W, Zhang L, Han S, et al.: Cost analysis of living donor kidney transplantation in China: a single-center experience. Ann Transplant. 2012; 17(2): 5–10. PubMed Abstract | Publisher Full Text\n\nBlumberg EA, Hauser IA, Stanisic S, et al.: Prolonged prophylaxis with valganciclovir is cost effective in reducing posttransplant cytomegalovirus disease within the United States. Transplantation. 2010; 90(12): 1420–6. PubMed Abstract | Publisher Full Text\n\nSnyder RA, Moore DR, Moore DE: More donors or more delayed graft function? A cost-effectiveness analysis of DCD kidney transplantation. Clin Transplant. 2013; 27(2): 289–96. PubMed Abstract | Publisher Full Text\n\nChitasombat MN, Watcharananan SP: Dataset 1 in: Burden of cytomegalovirus reactivation post kidney transplant with antithymocyte globulin use in Thailand: A retrospective cohort study. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16321.d219028"
}
|
[
{
"id": "38840",
"date": "03 Oct 2018",
"name": "Siraya Jaijakul",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript Chitasombat and Watcharananan conducted a study of the burden and cost of treatment for CMV reactivation/disease among CMV serostatus D+/R+ kidney transplant (KT) recipients who received antithymocyte globulin (ATG) in a single-center in Thailand, which has limited healthcare resources. The study has showed that due to high cost of valganciclovir, patients were not able to afford the medication for using as universal CMV prophylaxis post KT. Consequently, since ATG could increase risk of developing CMV reactivation/disease in KT recipients, patients required longer hospitalization resulting in more financial burden.\n\nThe manuscript is well-written and well-organized. As mentioned by authors, generalization of the study is limited by a small sample size from a single-center setting. The result of 180-day inpatient cost post KT was not significant higher in CMV reactivation/disease group which could be due to a small sample size but we still could see the trend of higher financial burden among CMV reactivation/disease group.\n\nA prospective study with a larger sample size or even in multi-center setting is warranted since this could impact treatment approaches for KT patients in Thailand. 
If CMV prophylaxis or pre-emptive treatment could be done, it would be interesting to see what the outcome would be and what the cost of serial CMV PCR monitoring and outpatient visits would be in this setting.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38839",
"date": "12 Nov 2018",
"name": "Adisorn Lumpaopong",
"expertise": [
"Reviewer Expertise Transplantation",
"tropical kidney disease"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study mentions about burden of CMV reactivation in KT patients who underwent transplantation. All patients had serology as D+/R+ and received ATG as induction. Since ATG is a highly immunosuppressive agent, the results of this study reveals CMV reactivation is common and other infectious complications also high. Medical expenses demonstrates different among CMV reactivation and without CMV group. As we know, cost of oral valganciclovir is expensive and need long period for prophylaxis. Preemptive treatment and frequent follow up might provide benefits in this patient group in Thailand that will provide cost saving for this situation.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1568
|
https://f1000research.com/articles/7-1567/v1
|
28 Sep 18
|
{
"type": "Case Report",
"title": "Case Report: Sporadic Burkitt lymphoma misdiagnosed as dental abscess in a 15-year-old girl",
"authors": [
"Marco Cabras",
"Paolo G. Arduino",
"Luigi Chiusa",
"Roberto Broccoletti",
"Mario Carbone",
"Paolo G. Arduino",
"Luigi Chiusa",
"Roberto Broccoletti",
"Mario Carbone"
],
"abstract": "Background: Burkitt lymphoma (BL) is a non-Hodgkin’s B-cell tumor that can be classified into three variants, based on clinical characteristics and epidemiology: endemic, human immunodeficiency-related and sporadic. Oral sporadic BL is quite an unusual entity, with the gastrointestinal trait being often the first site of appearance. Clinical finding: A 15-year-old patient that presented a symptomatic swelling of the right maxilla, unsuccessfully treated as a primary endodontic disease, displaying solid tissue on CT scan, “starry sky” pattern on oral biopsy, multifocal bone and lymph node uptake on PET. Diagnoses, interventions, and outcomes: A diagnosis of stage IV BL was formulated; Rituximab was then administered for three months according to Inter-B-NHL ritux 2010 protocol and CYM (cytarabine and methotrexate) chemotherapy. The patient was followed-up for three years, with no recurrence. Conclusion: It is important for general dental practitioners to suspect a malignancy in the differential diagnosis of unresponsive odontogenic infections in young healthy patients.",
"keywords": [
"Burkitt lymphoma",
"dental abscess",
"oral cavity",
"paediatric"
],
"content": "Introduction\n\nBurkitt lymphoma (BL) is a mature, aggressive high-grade B-cell non-Hodgkin’s lymphoma, which occurs in three distinctive subtypes: endemic (African), human immunodeficiency-related and sporadic (nonendemic)1. Endemic BL, which was the first to be described as a “sarcoma” by Denis Burkitt in African children 60 years ago2, occurs mostly among six-year-old males of equatorial Africa and Papua New Guinea, mainly within the maxillofacial complex, with an estimated 50% of cases detected in jaws or facial bones. Sporadic BL is typically observed in Western countries, with a European incidence of 2.2 cases per million, affecting mostly young adult Caucasian males, frequently within the abdomen, particularly in the ileocecal trait1.\n\nThe oral cavity is rarely the first site of onset. In this report, we describe the peculiar case of an IV-stage BL arisen as a maxillary swelling in a 15-year-old girl, misdiagnosed at first as endodontic disease.\n\n\nCase report\n\nIn November 2015, a healthy 15-year-old female was referred to our Department with chief complaint of dull pain on the permanent maxillary right second molar, on whom the general dental practitioner had already performed root canal treatment and administration of two grams of amoxicillin daily for one week, to no avail. Further questioning revealed an associated hypoesthesia of the right lower lip.\n\nConventional oral examination showed an overall swelling of the gingiva surrounding the painful teeth and extending to the right palatal mucosa (Figure 1), whereas no signs of oral disease could be detected in the lower lip. 
Orthopantomography (OPT) showed no signs of odontogenic disease, being completely unremarkable (Figure 2).\n\nSwelling of the right maxillary alveolar ridge, expanded to the right hard palate.\n\nBecause of the unreliability of OPT and the unresponsiveness to the combination of root canal and antibiotic treatments, a contrast-enhanced CT scan of the maxillofacial district was urgently requested, revealing solid tissue with high uptake in the right maxilla, with osteolysis and disruption of the floor of the right maxillary sinus (Figure 3).\n\nAxial view showing a solid mass in the right maxilla causing destruction of the maxillary sinus floor.\n\nA field mapping biopsy was conducted, collecting samples from the gingiva, inter-radicular tissue from the extracted permanent maxillary right second molar, palatal bone and mucosa. Histology showed in each slide a diffuse infiltration of monomorphic, medium-sized cells with scarce basophilic cytoplasm and non-cleaved round nuclei, mitotically active with a high rate of spontaneous apoptosis (Figure 4a). Immunophenotyping revealed an abnormal B-lymphocyte population (75% of cellular events), positive for CD20, with mild co-expression of CD10, c-Myc, and immunoglobulin lambda light chains. Fluorescent in situ hybridization (FISH) revealed IgH/Myc translocation in 70% of the nuclei, being negative for IgH/BCL2 and IgH/BCL6 translocations. CISH staining for the EBV-encoded RNA (EBER) transcript was widely positive (Figure 4b).\n\n(A) Hematoxylin & eosin 20x revealing a “starry-sky” pattern from pleomorphic and highly apoptotic lymphocytes and macrophages; (B) EBER 20x in situ hybridization positive for EBV-encoded RNA.\n\nThe young patient was referred to the Oncohaematology Paediatric Unit of the “Ospedale Infantile Regina Margherita”, Turin, with a clinicopathological diagnosis of Burkitt lymphoma.\n\nHere, an 18F-FDG PET/CT was requested, highlighting intense uptake in the maxilla, shoulder blades, right humerus, dorso-lumbar vertebrae and pelvis. 
Given the combination of clinical signs, microscopy findings and diagnostic imaging, a diagnosis of stage IV Burkitt lymphoma was formulated. The patient was treated with rituximab according to the Inter-B-NHL ritux 2010 protocol in association with CYM (cytarabine and methotrexate) chemotherapy, between December 2015 and March 2016.\n\nIn May 2016, a PET/CT scan showed complete remission of the disease, and clinical oral examination showed complete remission of the gingival-palatal enlargement. Since then, the currently 18-year-old patient appears to be in good health.\n\n\nDiscussion\n\nOral sporadic Burkitt lymphoma (sBL) is a rare clinical entity among children, with few case reports published to date3–13. To the best of our knowledge, this is the first detailed report of oral sBL in a teenage patient in Italy.\n\nCase series available worldwide14–18 show an infrequent involvement of the mouth as the first site, accounting for 3%14,15, 9.5%16, and up to 16%17,18 of all sBL. Clinically, oral sBL acts as a fast-growing, rapidly expanding tumor, which may cause dull, toothache-like pain, teeth malposition, occlusal precontact with subsequent difficulty in chewing, and open bite6; in some cases sudden tooth loosening can occur6,11.\n\nSuch behavior can easily be mistaken for an odontogenic disease, leading the general dental practitioner (GDP) to administer antibiotics and perform unnecessary procedures, such as root canal treatment of the teeth closest to the swollen or painful area, as the presented case and a previous report have shown3, causal periodontal treatment11, or even tooth extractions9,10. 
Moreover, when the tonsil is the primary site, dysphagia and obstructive sleep apnoea may occur5,12.\n\nA panoramic radiograph can be either unremarkable, as in our case and a similar report4, or can reveal an ill-defined isolated7,9 or multifocal11 radiolucency with loss of lamina dura and periodontal ligament6,7,9,13, sometimes displaying a worrisome “floating teeth” appearance6,7. Thus, cervicofacial CT is mandatory to properly assess the degree of disruption of the cortical bone and the status of the neighbouring naso-orbital areas5,8,11. On the other hand, an 18F-FDG PET may be needed either to complete the diagnostic work-up when a widespread lymphoma is suspected5,10, or as a monitoring tool, especially in the first 12 months after diagnosis, when relapse is more frequently encountered12. Ultimately, an in-depth histologic analysis, comprising immunophenotyping showing CD20-positive clones and FISH revealing IgH/Myc translocation, is the key to resolving the differential diagnosis with a wide range of diseases that may share overlapping symptoms and clinico-radiological signs, such as osteomyelitis, benign odontogenic epithelial neoplasms, other types of lymphomas, Langerhans cell histiocytosis, Ewing’s sarcoma, osteosarcoma, chondrosarcoma, neurosarcoma and fibrosarcoma4,7,10.\n\nBeing a rapidly growing tumor with very high replicative activity, BL is particularly sensitive to cytotoxic chemotherapeutic agents, in particular in a polychemotherapy regimen alongside rituximab, an anti-CD20 monoclonal antibody responsible for a further increase in the five-year survival rate, which can reach 90–100% for early stages1,8,19.\n\nTherefore, since prognosis is closely related to early diagnosis, specialists and GDPs should consider the possibility of BL in case of rapid swelling of the jaws in young healthy individuals, especially if nonresponsive to traditional antibiotic therapy.\n\n\nConsent\n\nAn informed consent 
form was signed by the patient’s mother, in order to obtain permission for photo usage and for the use and publication of the young patient’s data.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nDozzo M, Carobolante F, Donisi PM, et al.: Burkitt lymphoma in adolescents and young adults: management challenges. Adolesc Health Med Ther. 2016; 8: 11–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBURKITT D: A sarcoma involving the jaws in African children. Br J Surg. 1958; 46(197): 218–23. PubMed Abstract | Publisher Full Text\n\nArdekian L, Peleg M, Samet N, et al.: Burkitt's lymphoma mimicking an acute dentoalveolar abscess. J Endod. 1996; 22(12): 697–8. PubMed Abstract | Publisher Full Text\n\nTsui SH, Wong MH, Lam WY: Burkitt's lymphoma presenting as mandibular swelling--report of a case and review of publications. Br J Oral Maxillofac Surg. 2000; 38(1): 8–11. PubMed Abstract | Publisher Full Text\n\nBanthia V, Jen A, Kacker A: Sporadic Burkitt's lymphoma of the head and neck in the pediatric population. Int J Pediatr Otorhinolaryngol. 2003; 67(1): 59–65. PubMed Abstract | Publisher Full Text\n\nJan A, Vora K, Sándor GK: Sporadic Burkitt's lymphoma of the jaws: the essentials of prompt life-saving referral and management. J Can Dent Assoc. 2005; 71(3): 165–8. PubMed Abstract\n\nPatil K, Mahima VG, Jayanth BS, et al.: Burkitt's lymphoma in an Indian girl: a case report. J Indian Soc Pedod Prev Dent. 2007; 25(4): 194–9. PubMed Abstract\n\nValenzuela-Salas B, Dean-Ferrer A, Alamillos-Granados FJ: Burkitt's lymphoma: a child's case presenting in the maxilla. Clinical and radiological aspects. Med Oral Patol Oral Cir Bucal. 2010; 15(3): e479–82. PubMed Abstract\n\nPereira CM, Lopes AP, Meneghini AJ, et al.: Burkitt's lymphoma in a young Brazilian boy. Malays J Pathol. 2010; 32(1): 59–64. PubMed Abstract\n\nBilodeau E, Galambos C, Yeung A, et al.: Sporadic Burkitt lymphoma of the jaw: case report and review of the literature. Quintessence Int. 2012; 43(4): 333–6. 
PubMed Abstract\n\nPadmanabhan MY, Pandey RK, Kumar A, et al.: Dental management of a pediatric patient with Burkitt lymphoma: a case report. Spec Care Dentist. 2012; 32(3): 118–23. PubMed Abstract | Publisher Full Text\n\nToader C, Toader M, Stoica A, et al.: Tonsillar lymphoma masquerading as obstructive sleep apnea - pediatric case report. Rom J Morphol Embryol. 2016; 57(2 Suppl): 885–891. PubMed Abstract\n\nUgar DA, Bozkaya S, Karaca I, et al.: Childhood craniofacial Burkitt's lymphoma presenting as maxillary swelling: report of a case and review of literature. J Dent Child (Chic). 2006; 73(1): 45–50. PubMed Abstract\n\nMbulaiteye SM, Biggar RJ, Bhatia K, et al.: Sporadic childhood Burkitt lymphoma incidence in the United States during 1992-2005. Pediatr Blood Cancer. 2009; 53(3): 366–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSandlund JT, Fonseca T, Leimig T, et al.: Predominance and characteristics of Burkitt lymphoma among children with non-Hodgkin lymphoma in northeastern Brazil. Leukemia. 1997; 11(5): 743–6. PubMed Abstract | Publisher Full Text\n\nRamanathan A, Mahmoud HA, Hui LP, et al.: Oral extranodal non Hodgkin's lymphoma: series of forty two cases in Malaysia. Asian Pac J Cancer Prev. 2014; 15(4): 1633–7. PubMed Abstract | Publisher Full Text\n\nErtem U, Duru F, Pamir A, et al.: Burkitt's lymphoma in 63 Turkish children diagnosed over a 10 year period. Pediatr Hematol Oncol. 1996; 13(2): 123–34. PubMed Abstract | Publisher Full Text\n\nBi CF, Tang Y, Zhang WY, et al.: Sporadic Burkitt lymphomas of children and adolescents in Chinese: a clinicopathological study of 43 cases. Diagn Pathol. 2012; 7: 72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDunleavy K, Little RF, Wilson WH: Update on Burkitt Lymphoma. Hematol Oncol Clin North Am. 2016; 30(6): 1333–1343. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "40653",
"date": "11 Dec 2018",
"name": "Cristiana Bellan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe Authors describe a case of sporadic Burkitt lymphoma in an adolescent patient with a \"non-conventional\" oral presentation. They clearly describe the imaging appearance, the morphology and immunophenotyping, as well as the molecular characteristics of this entity, highlighting the importance of differential diagnosis in oral pathology in young patients.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "54889",
"date": "18 Feb 2020",
"name": "Mahmoud Rezk Abdelwahed Hussein",
"expertise": [
"Molecular pathology"
],
"suggestion": "Approved",
"report": "Approved\n\nThis is a well written paper that presents a very interesting case report that concisely describes and addresses a rare clinicopathologic entity in the realm of lymphoma, in particular oral lymphomatous proliferation. The case details were presented clearly and concisely. The relevant literature and studies were nicely summarized and addressed. All in all, I support a decision \"to accept\".\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1567
|
https://f1000research.com/articles/7-342/v1
|
20 Mar 18
|
{
"type": "Research Article",
"title": "Adolescent THC exposure does not sensitize conditioned place preferences to subthreshold d-amphetamine in male and female rats",
"authors": [
"Robin J Keeley",
"Cameron Bye",
"Jan Trow",
"Robert J McDonald"
],
"abstract": "The acute effects of marijuana consumption on brain physiology and behaviour are well documented, but the long-term effects of its chronic use are less well known. Chronic marijuana use during adolescence is of increased interest, given that the majority of individuals first use marijuana during this developmental stage, and adolescent marijuana use is thought to increase the susceptibility to abusing other drugs when exposed later in life. It is possible that marijuana use during critical periods in adolescence could lead to increased sensitivity to other drugs of abuse later on. To test this, we chronically administered ∆9-tetrahydrocannabinol (THC) to male and female Long-Evans (LER) and Wistar (WR) rats directly after puberty onset. Rats matured to postnatal day 90 before being exposed to a conditioned place preference task (CPP). A subthreshold dose of d-amphetamine, found not to induce place preference in drug naïve rats, was used as the unconditioned stimulus. The effect of d-amphetamine on neural activity was inferred by quantifying cfos expression in the nucleus accumbens and dorsal hippocampus following CPP training. Chronic exposure to THC post-puberty had no potentiating effect on the ability of a subthreshold dose of d-amphetamine to induce CPP. No differences in cfos expression were observed. These results show that chronic exposure to THC during puberty did not increase sensitivity to d-amphetamine in adult LER and WR rats. This supports the concept that THC may not sensitize the response to all drugs of abuse.",
"keywords": [
"THC",
"adolescence",
"d-amphetamine",
"strain",
"sex",
"conditioned place preference"
],
"content": "Introduction\n\nMarijuana is one of the most commonly used drugs of abuse worldwide1, and the psychoactive properties of marijuana are a result of the actions of ∆9-tetrahydrocannabinol (THC)2,3. Chronic marijuana use is associated with an increased risk of psychosis and depression4, and these relationships are even more concerning when use occurs during adolescence (for example, 5–7). In addition to the reported increased sensitivity of the adolescent period to the effects of marijuana, sex may also play a role in the consequences of both short- and long-term marijuana use, with females more sensitive to depression and anxiety following marijuana exposure in adolescence8.\n\nIn addition to sex differences in the outcome of adolescent marijuana use, genetic background, including rat strain, can change the long-term consequences of THC exposure9,10. Rat strains vary on measures related to learning and memory11–18, anxiety18 and development10 as well as in response to drugs of abuse19–25. Given that rat strains are used interchangeably in drug abuse research despite their innate differences, the inclusion of multiple strains of rat in any one study can help determine the strength and reproducibility of the long-term consequences of marijuana.\n\nMarijuana use during adolescence may increase the likelihood of using other physiologically and socially harmful drugs of abuse in adulthood. THC administration can potentiate the response to opioids26 and nicotine27, through the facilitation of brain reward mechanisms28,29. However, the interaction between the consumption of one drug of abuse and initiating use of another is complex, and individual differences may predict sensitivity to other drugs, including amphetamine19,23,30–32. 
The use of multiple rat strains, including Long-Evans (LER) and Wistar (WR) rats that have previously been observed to have differential sensitivity to THC, can model individual differences in response to THC.\n\nThis study sought to determine the long-term consequences of THC administration during the post-pubertal period in two previously studied strains of rats9. Following systemic administration of THC for 14 days after puberty onset, rats were aged to 90 days, at which point all rats were trained in a conditioned place preference (CPP) task to a subthreshold dose of d-amphetamine. It was hypothesized that if a particular strain and sex group was more sensitive to the effects of THC and if THC exposure increased the sensitivity to other drugs of abuse, sensitive rats would develop CPP to the sub-threshold dose of d-amphetamine and show increased neural activation, as inferred by protein expression of the immediate early gene, Cfos, in reward (nucleus accumbens) and context-specific (dorsal hippocampus) brain regions. However, if THC administration does not increase the sensitivity of rats to amphetamine, then no strain or sex group should show CPP behaviour in response to a subthreshold dose of d-amphetamine and no differences in Cfos expression should be observed.\n\n\nMethods\n\nSubjects. Subjects were purchased and shipped from Charles River (Senneville, Quebec) as adults (250–300g) (LER female: N = 16; LER male: N = 24; WR female: N = 16; WR male: N = 16). All rats were housed in standard laboratory conditions (21°C and 35% relative humidity; 12D:12L) in Plexiglas tubs (46cm × 25cm × 20cm) with ad libitum access to food and water. All rat handling and procedures were done in accordance with the University of Lethbridge's Animal Welfare Committee and the Canadian Council on Animal Care guidelines.\n\nD-amphetamine doses. 
Drug naive adult rats were tested using three doses of d-amphetamine: 0.5mg/kg, 0.7mg/kg and 1.0mg/kg (0.49mg/ml d-amphetamine in saline, Sigma Aldrich). The two lower doses were chosen as 1mg/kg of d-amphetamine has been shown to induce CPP by multiple research groups (as reviewed in 33), which was confirmed here in naïve LER male rats (Figure 1C). N = 8 for each strain, sex and drug dosage group.\n\nA. 0.5mg/kg d-amphetamine. B. 0.7mg/kg d-amphetamine. C. 1.0mg/kg d-amphetamine. Note: 0.5 and 0.7mg/kg d-amphetamine were tested in all strain and sex groups and 1mg/kg was tested only in LER males to confirm previously published work. * p < 0.05. Individual data plus mean and SEM. LER females (closed circle), LER male (closed triangle), WR female (open circle), WR male (open triangle).\n\nCPP: Apparatus and training. Apparatus – A similar apparatus and procedure to that used for discriminative appetitive34,35 and fear conditioning36,37 to context tasks were implemented here. Briefly, opaque Plexiglas contexts that differed in shape (triangle versus square), colour (black versus white) and odour (amyl acetate versus eucalyptus) were connected with a grey alleyway. Both contexts and the alleyway were placed upon a clear Plexiglas table, and underneath the table, a mirror was inclined at a 45° angle, which allowed for viewing by both an observer and a video camera.\n\nTraining – Pre-exposure: Rats were placed in the grey alleyway and allowed to freely explore both contexts for 10min then returned to their home cage. Dwell time in each chamber was recorded by an observer.\n\nTraining: The context to be paired with d-amphetamine injection (paired) and the context to be paired with a saline injection (unpaired) were assigned to each rat in a counterbalanced, quasi-random fashion. For training, rats were given 6 consecutive daily exposures33, where they were given an injection of either saline or d-amphetamine then placed in one of the contexts for 30min. 
Injection type and context exposure alternated each day.\n\nPreference: Rats explored the contexts connected by a grey alleyway for 10min. Dwell time in both contexts was recorded.\n\nSubjects, puberty onset and drug administration. Subjects were acquired, bred and handled as previously described9,10,15. Briefly, male and female LER and WR (N = 9/strain and sex group) were obtained from Charles River (Senneville, Quebec). Rats acclimated for 2 weeks before breeding. Pups were weaned at postnatal day 21 (p21) and placed into sex-matched pairs or triplets. N = 8 for all strain and sex groups for all experiments.\n\nPuberty onset, group assignment and injection procedures were conducted as previously described10. Puberty onset was determined using the external features of the genitalia (vaginal opening and preputial separation), which correlate with gonadal hormone changes associated with puberty38,39. On weaning day, rats were assigned to their experimental groups: handled control (CON), vehicle (VEH; 1:1:18 ethanol:Cremophor:saline) or 5mg/kg THC (THC). I.p. injection procedures and handling were conducted as previously described9. On the day of determination of puberty onset, rats were brought to a dark injection room. All rats were weighed before treatment. All rats received treatment for 14 consecutive days following determination of puberty onset. After the treatment period, rats were aged to adulthood (p90) before behavioural testing.\n\nCPP to a subthreshold dose of d-amphetamine: Apparatus & training. From the results of Experiment 1 (see Results section), a subthreshold dose of d-amphetamine was determined to be 0.7mg/kg. This dose was used for all rats exposed to adolescent THC. Apparatus and training were conducted as described.\n\nPerfusion & fixation. Cfos protein is present in neurons that were active 20–30min after an experience40, and in rats, d-amphetamine will reach the brain within 5min of an i.p. injection and remain stable for 1hr41. 
Any cfos protein signal detected 1hr after d-amphetamine injection represents the neurons active 30min after d-amphetamine injection. One week after the final day of CPP, rats were injected with a single 1mg/kg dose of d-amphetamine and sat in their home cage for 1hr. Rats were euthanized with a single i.p. injection of sodium pentobarbital (120mg/kg) and transcardially perfused with approximately 150mL of 1x phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA) in 1xPBS. Brains were immersion fixed in 4% PFA in 1xPBS. PFA was replaced 24h after perfusion with 30% sucrose and 0.2% Na azide in 1xPBS. Brains were sectioned at 40µm using a cryostat (CM1900, Leica, Germany) and placed directly into Eppendorf tubes containing 0.2% Na azide in 1xPBS.\n\nCfos immunohistochemistry & quantification. The amount of cfos protein was stained as previously described42. Briefly, free-floating tissue was washed (1xPBS), followed by a 30min quenching step (0.3% H2O2 in 1xPBS). Tissue was blocked (1.5% goat serum in 0.3% triton-X 1xPBS) for 30min then incubated in 1° antibody (rabbit; 1:1000, 0.33% triton-X in 1xPBS with 1.5% normal goat serum; Santa Cruz, California) for 24hrs. Then, tissue was washed followed by a 24hr incubation in 2° antibody (anti-rabbit; 1:1000, Vector Labs, Canada) at room temperature. On the third day, tissue was washed then placed in AB Complex (Vector labs, Canada) for 45min. Tissue was washed then bathed for 5min in a 0.5% 3,3’-diaminobenzidine (DAB) solution (1xPBS with NiCl2-6H2O and 0.05% H2O2). Sections were washed then mounted on 1% gelatin coated slides left to dry for 24hrs, dehydrated and coverslipped with Permount.\n\nRepresentative images from NAc and dorsal hippocampus were taken and quantified using particle analysis in Image J (NIH, USA). 
Regions of interest were defined using the Rat Brain Atlas43, and particles were counted per unit area.\n\nVaginal cytology and the determination of estrous cycle were conducted as previously described9,10,34. Sterile Q-tips were dipped in sterile distilled water to collect samples onto standard glass slides (Vector labs, Canada). Vaginal smears were collected during all behavioural testing days and examined using brightfield microscopy on a Zeiss Axio Imager MT (Carl Zeiss, MicroImaging GmbH, Germany) using the 20X objective.\n\nAll raw data can be found in the raw dataset. Statistical tests were conducted using SPSS (IBM, ver 17), and estrous cycle phase was used as a covariate. For Experiment 1, a repeated measures ANOVA was conducted for percent dwell time in either context with strain and sex as the between subjects factors. Since we were interested in whether a preference for one context over another had occurred, a priori comparisons were conducted within each strain and sex group comparing dwell time in each context. We report partial η2 for effect size and observed power for all results.\n\nFor Experiment 2, percent dwell time in the paired and unpaired contexts on the pre-exposure and preference days was compared within strain and sex groups using drug condition (group) as a between subjects factor. A priori hypotheses were established such that within each drug group and within each strain and sex group, comparisons between the paired and unpaired contexts were always conducted. 
For cfos quantification, between subjects comparisons within strain and sex groups were conducted in order to determine the effects of drug exposure on a specific strain and sex group.\n\n\nResults\n\nEstrous cycle did not significantly alter any of the results and was not included as a covariate in subsequent analyses.\n\nNeither an initial preference nor any preference after training was observed for 0.5 (Figure 1A) or 0.7mg/kg (Figure 1B) of d-amphetamine for any strain or sex group (see Table 1 for statistical results). A dose of 1mg/kg d-amphetamine was used to confirm previous experiments and did induce significant place preference (see Figure 1C), thus the 0.7mg/kg dose was considered subthreshold for all subsequent experiments.\n\nThere was no pre-existing bias to spend more time in the paired or unpaired context, regardless of strain, sex or drug administration. No interaction between drug and context was observed in any strain and sex group. On the preference day, LER females overall spent significantly more time in the paired context (F(1, 21) = 17.483, p < 0.001; Figure 2A). No overall effect of group was observed. Individual comparisons within groups revealed that CON (p = 0.04) and VEH (p = 0.028) LER females spent significantly more time in the context paired with d-amphetamine. No such difference was observed within LER females exposed to THC, although this value did approach statistical significance (p = 0.065). LER males (Figure 2B), WR females (Figure 2C) and WR males (Figure 2D) showed no significant effect of drug and did not show an overall preference for one context over the other (see Table 2 for statistical results).\n\nPreference for A) LER females, B) LER males, C) WR females and D) WR males. 
* p<0.05.\n\nNo significant effects were observed for any strain and sex group for cfos expression in dorsal hippocampus (Figure 3) and NAc (Figure 4) following a 1mg/kg injection of d-amphetamine.\n\n\nDiscussion\n\nHere, we report no long-term consequences of adolescent THC exposure on sensitivity to d-amphetamine in adulthood. We did observe that rearing environment affected sensitivity to d-amphetamine in LER females; CON LER females bred in house expressed CPP behaviour to a 0.7mg/kg dose of d-amphetamine, whereas those obtained from a commercial breeder (Charles River) did not. Additionally, using immediate early gene protein expression, we observed no significant effect of THC exposure following puberty onset on d-amphetamine-induced activation in the nucleus accumbens and dorsal hippocampus.\n\nAdolescent THC exposure did not potentiate the adult response to d-amphetamine. D-amphetamine increases dopaminergic tone when systemically administered44–47 and is highly rewarding48,49. Given the premise that THC acts as a gateway drug, we assumed adolescent exposure to THC would potentiate reward circuitry, enhancing sensitivity to d-amphetamine.\n\nPriming of amphetamine response by cannabinoids has been observed by some researchers50–52 and not by others53. Differences among studies examining this effect include the dose, duration and starting age of THC exposure, as well as the timing of exposure to amphetamine; one study reported amphetamine-primed reward to be dependent on the time since exposure to THC51. However, our results should help mitigate many of these issues, as our dose of THC was relatively moderate, was given following the onset of puberty, which can be influenced by THC54 and lasted throughout the adolescent period and into early adulthood, all of which are reasonable analogues, given experimental constraints, to the human adolescent marijuana consumption experience. 
One possible explanation for the pattern of results obtained in the present study may be the use of CPP versus the self-administration paradigm. CPP is a standard metric for determining the rewarding properties of drugs of abuse and has been observed for multiple doses of drugs, including amphetamine33,55–57. Future experiments should consider allowing animals to self-administer either THC or amphetamines, potentially looking at the correlations between self-administration of both drugs. Unfortunately, THC has proven problematic in self-administration paradigms55,58,59.\n\nPrevious studies have demonstrated priming effects of THC on other drugs of abuse. Increased self-administration of heroin or other opiates has been observed52,60–62, partially dependent on cannabinoid receptors63. Thus, the endogenous opioid system is particularly sensitive to the long-term consequences of THC. Indeed, given the increased abuse of prescription opiates, research examining the interplay between the endogenous cannabinoid and opioid systems could potentially prevent the transition from marijuana use to opiate use. It is possible that priming by THC is specific to drugs targeting the opioid system.\n\nLER females bred in house at the University of Lethbridge expressed CPP to a 0.7mg/kg dose of d-amphetamine whereas those purchased from Charles River did not. This was exclusively observed in LER females. Strain differences in response to amphetamine have been observed previously19,20,22,23. This strain and sex specific effect in response to amphetamine may be the result of the interplay between the stress system and monoaminergic function, which has been posited to explain differences in response to amphetamine between two other strains of rats, Fischer 344 and Lewis64,65. It is possible that differences in these systems may occur in LER females reared under different conditions66,67. 
Regardless, further understanding of this fascinating effect of strain and rearing conditions should be explored as it is clear that genetic differences and differences in rearing correlate highly with drug abuse in adulthood21,32,68–70.\n\nA strain-dependent sex difference in response to amphetamines has been observed previously64, although this effect was not observed in LER. Differential responses in one sex and not the other across strains are not uncommon (for example, 71); however, most studies examining strain differences in response to drugs of abuse typically only use males (as discussed in 64). There is a tendency for females to be more sensitive to drugs of abuse, including amphetamine30,64,72–75, which is partially mediated through the endogenous hormonal rhythms of females76–78. Here, training days covered the extent of at least one full estrous cycle, and there was no significant effect of estrous cycle phase on CPP behaviour. Thus, we have identified that LER females are sensitive to rearing environment in relation to CPP behaviour in response to amphetamine. This kind of effect should not be underestimated, as the implications of ignoring sex, strain and rearing differences and their interactions are increasingly recognized by granting agencies and scientific organizations as contributing to individual differences and reproducibility problems in current neuroscience research.\n\n\nConclusions\n\nThis study does not support a link between adolescent THC exposure and sensitivity to other drugs of abuse; rats tested for changes in sensitivity to d-amphetamine following long-term exposure to THC during adolescence showed no such changes. This is surprising, given the vulnerability of d-amphetamine CPP in LER females to developmental perturbations (in this case, rearing environment). WR displayed stable behavioural profiles; neither rearing environment nor THC administration altered their response to a sub-threshold dose of d-amphetamine. 
Our previous research identified WR as resilient to the effects of adolescent THC exposure9. Further research into the mechanisms behind resiliency in these groups may help identify factors that can be protective for groups at risk of developing addiction.\n\n\nData availability\n\nDataset 1: Raw data associated with Figure 1–Figure 4. 10.5256/f1000research.14029.d19672079",
"appendix": "Competing interests\n\n\n\nThe authors declare that there are no competing interests.\n\n\nGrant information\n\nThis research was funded by a NSERC Discovery Grant awarded to RJM.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nUN Office on Drugs and Crime (UNODC): World Drug Report 2015. 2015. Reference Source\n\nMechoulam R: Marihuana chemistry. Science. 1970; 168(3936): 1159–1165. PubMed Abstract | Publisher Full Text\n\nRazdan RK: Structure-activity relationships in cannabinoids. Pharmacol Rev. 1986; 38(2): 75–149. PubMed Abstract\n\nMoore TH, Zammit S, Lingford-Hughes A, et al.: Cannabis use and risk of psychotic or affective mental health outcomes: a systematic review. Lancet. 2007; 370(9584): 319–328. PubMed Abstract | Publisher Full Text\n\nCha YM, Jones KH, Kuhn CM, et al.: Sex differences in the effects of delta9-tetrahydrocannabinol on spatial learning in adolescent and adult rats. Behav Pharmacol. 2007; 18(5–6): 563–569. PubMed Abstract | Publisher Full Text\n\nCha YM, White AM, Kuhn CM, et al.: Differential effects of delta9-THC on learning in adolescent and adult rats. Pharmacol Biochem Behav. 2006; 83(3): 448–455. PubMed Abstract | Publisher Full Text\n\nO’Shea M, Singh ME, McGregor IS, et al.: Chronic cannabinoid exposure produces lasting memory impairment and increased anxiety in adolescent but not adult rats. J Psychopharmacol. 2004; 18(4): 502–508. PubMed Abstract | Publisher Full Text\n\nPatton GC, Coffey C, Carlin JB, et al.: Cannabis use and mental health in young people: cohort study. BMJ. 2002; 325(7374): 1195–1198. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKeeley RJ, Trow J, Bye C, et al.: Part II: Strain- and sex-specific effects of adolescent exposure to THC on adult brain and behaviour: variants of learning, anxiety and volumetric estimates. Behav Brain Res. 2015; 288: 132–52. 
PubMed Abstract | Publisher Full Text\n\nKeeley RJ, Trow J, McDonald RJ: Strain and sex differences in puberty onset and the effects of THC administration on weight gain and brain volumes. Neuroscience. 2015; 305: 328–42. PubMed Abstract | Publisher Full Text\n\nAndrews JS, Jansen JH, Linders S, et al.: Performance of four different rat strains in the autoshaping, two-object discrimination, and swim maze tests of learning and memory. Physiol Behav. 1995; 57(4): 785–790. PubMed Abstract | Publisher Full Text\n\nHolahan MR, Honegger KS, Routtenberg A: Expansion and retraction of hippocampal mossy fibers during postweaning development: strain-specific effects of NMDA receptor blockade. Hippocampus. 2007; 17(1): 58–67. PubMed Abstract | Publisher Full Text\n\nHolahan MR, Rekart JL, Sandoval J, et al.: Spatial learning induces presynaptic structural remodeling in the hippocampal mossy fiber system of two rat strains. Hippocampus. 2006; 16(6): 560–570. PubMed Abstract | Publisher Full Text\n\nHort J, Brozek G, Komárek V, et al.: Interstrain differences in cognitive functions in rats in relation to status epilepticus. Behav Brain Res. 2000; 112(1–2): 77–83. PubMed Abstract | Publisher Full Text\n\nKeeley RJ, Bye C, Trow J, et al.: Strain and sex differences in brain and behaviour of adult rats: Learning and memory, anxiety and volumetric estimates. Behav Brain Res. 2015; 288(5): 118–31. PubMed Abstract | Publisher Full Text\n\nKeeley RJ, Wartman BC, Häusler AN, et al.: Effect of juvenile pretraining on adolescent structural hippocampal attributes as a substrate for enhanced spatial performance. Learn Mem. 2010; 17(7): 344–354. PubMed Abstract | Publisher Full Text\n\nParé WP: Enhanced retrieval of unpleasant memories influenced by shock controllability, shock sequence, and rat strain. Biol Psychiatry. 1996; 39(9): 808–813. 
PubMed Abstract | Publisher Full Text\n\nvan der Staay FJ, Schuurman T, van Reenen CG, et al.: Emotional reactivity and cognitive performance in aversively motivated tasks: a comparison between four rat strains. Behav Brain Funct. 2009; 5: 50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnisman H, Cygan D: Central effects of scopolamine and (+)-amphetamine on locomotor activity: interaction with strain and stress variables. Neuropharmacology. 1975; 14(11): 835–840. PubMed Abstract | Publisher Full Text\n\nCamp DM, Browman KE, Robinson TE: The effects of methamphetamine and cocaine on motor behavior and extracellular dopamine in the ventral striatum of Lewis versus Fischer 344 rats. Brain Res. 1994; 668(1–2): 180–193. PubMed Abstract | Publisher Full Text\n\nDeiana S, Fattore L, Spano MS, et al.: Strain and schedule-dependent differences in the acquisition, maintenance and extinction of intravenous cannabinoid self-administration in rats. Neuropharmacology. 2007; 52(2): 646–654. PubMed Abstract | Publisher Full Text\n\nFujimoto Y, Kitaichi K, Nakayama H, et al.: The pharmacokinetic properties of methamphetamine in rats with previous repeated exposure to methamphetamine: the differences between Long-Evans and Wistar rats. Exp Anim. 2007; 56(2): 119–129. PubMed Abstract | Publisher Full Text\n\nGeorge FR, Porrino LJ, Ritz MC, et al.: Inbred rat strain comparisons indicate different sites of action for cocaine and amphetamine locomotor stimulant effects. Psychopharmacology (Berl). 1991; 104(4): 457–462. PubMed Abstract | Publisher Full Text\n\nOnaivi ES, Maguire PA, Tsai NF, et al.: Comparison of behavioral and central BDZ binding profile in three rat lines. Pharmacol Biochem Behav. 1992; 43(3): 825–831. PubMed Abstract | Publisher Full Text\n\nOrtiz S, Oliva JM, Pérez-Rial S, et al.: Differences in basal cannabinoid CB1 receptor function in selective brain areas and vulnerability to voluntary alcohol consumption in Fawn Hooded and Wistar rats. 
Alcohol Alcohol. 2004; 39(4): 297–302. PubMed Abstract | Publisher Full Text\n\nFiellin LE, Tetrault JM, Becker WC, et al.: Previous use of alcohol, cigarettes, and marijuana and subsequent abuse of prescription opioids in young adults. J Adolesc Health. 2013; 52(2): 158–163. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPanlilio LV, Zanettini C, Barnes C, et al.: Prior exposure to THC increases the addictive effects of nicotine in rats. Neuropsychopharmacology. 2013; 38(7): 1198–1208. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGardner EL: Addictive potential of cannabinoids: the underlying neurobiology. Chem Phys Lipids. 2002; 121(1–2): 267–290. PubMed Abstract | Publisher Full Text\n\nGardner EL, Paredes W, Smith D, et al.: Facilitation of brain stimulation reward by delta9-tetrahydrocannabinol. Psychopharmacology (Berl). 1988; 96(1): 142–144. PubMed Abstract | Publisher Full Text\n\nKlebaur JE, Bevins RA, Segar TM, et al.: Individual differences in behavioral responses to novelty and amphetamine self-administration in male and female rats. Behav Pharmacol. 2001; 12(4): 267–275. PubMed Abstract | Publisher Full Text\n\nPiazza PV, Deminiere JM, Le Moal M, et al.: Factors that predict individual vulnerability to amphetamine self-administration. Science. 1989; 245(4925): 1511–1513. PubMed Abstract | Publisher Full Text\n\nSchenk S, Hunt T, Malovechko R, et al.: Differential effects of isolation housing on the conditioned place preference produced by cocaine and amphetamine. Pharmacol Biochem Behav. 1986; 24(6): 1793–1796. PubMed Abstract | Publisher Full Text\n\nTzschentke TM: Measuring reward with the conditioned place preference (CPP) paradigm: update of the last decade. Addict Biol. 2007; 12(3–4): 227–462. PubMed Abstract | Publisher Full Text\n\nKeeley RJ, Zelinski EL, Fehr L, et al.: The effect of exercise on carbohydrate preference in female rats. Brain Res Bull. 2014; 101: 45–50. 
PubMed Abstract | Publisher Full Text\n\nRalph MR, Ko CH, Antoniadis EA, et al.: The significance of circadian phase for performance on a reward-based learning task in hamsters. Behav Brain Res. 2002; 136(1): 179–184. PubMed Abstract | Publisher Full Text\n\nAntoniadis EA, McDonald RJ: Discriminative fear conditioning to context expressed by multiple measures of fear in the rat. Behav Brain Res. 1999; 101(1): 1–13. PubMed Abstract | Publisher Full Text\n\nAntoniadis EA, Ko CH, Ralph MR, et al.: Circadian rhythms, aging and memory. Behav Brain Res. 2000; 114(1–2): 221–233. PubMed Abstract | Publisher Full Text\n\nKorenbrot CC, Huhtaniemi IT, Weiner RI: Preputial separation as an external sign of pubertal development in the male rat. Biol Reprod. 1977; 17(2): 298–303. PubMed Abstract | Publisher Full Text\n\nParker CR Jr, Mahesh VB: Hormonal events surrounding the natural onset of puberty in female rats. Biol Reprod. 1976; 14(3): 347–353. PubMed Abstract | Publisher Full Text\n\nHu E, Mueller E, Oliviero S, et al.: Targeted disruption of the c-fos gene demonstrates c-fos-dependent and -independent pathways for gene expression stimulated by growth factors or oncogenes. EMBO J. 1994; 13(13): 3094–103. PubMed Abstract | Free Full Text\n\nKuhn CM, Schanberg SM: Metabolism of amphetamine after acute and chronic administration to the rat. J Pharmacol Exp Ther. 1978; 207(2): 544–554. PubMed Abstract\n\nBlum ID, Lamont EW, Rodrigues T, et al.: Isolating neural correlates of the pacemaker for food anticipation. PLoS One. 2012; 7(4): e36117. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPaxinos G, Watson C: The rat brain in stereotaxic coordinates. Academic press; 2007. Reference Source\n\nMelega WP, Williams AE, Schmitz DA, et al.: Pharmacokinetic and pharmacodynamic analysis of the actions of D-amphetamine and D-methamphetamine on the dopamine terminal. J Pharmacol Exp Ther. 1995; 274(1): 90–96. 
PubMed Abstract\n\nSulzer D, Maidment NT, Rayport S: Amphetamine and other weak bases act to promote reverse transport of dopamine in ventral midbrain neurons. J Neurochem. 1993; 60(2): 527–535. PubMed Abstract | Publisher Full Text\n\nSulzer D, Sonders MS, Poulsen NW, et al.: Mechanisms of neurotransmitter release by amphetamines: a review. Prog Neurobiol. 2005; 75(6): 406–433. PubMed Abstract | Publisher Full Text\n\nTaylor KM, Snyder SH: Amphetamine: differentiation by d and l isomers of behavior involving brain norepinephrine or dopamine. Science. 1970; 168(3938): 1487–1489. PubMed Abstract | Publisher Full Text\n\nPickens R, Harris WC: Self-administration of d-amphetamine by rats. Psychopharmacologia. 1968; 12(2): 158–163. PubMed Abstract | Publisher Full Text\n\nYokel RA, Wise RA: Increased lever pressing for amphetamine after pimozide in rats: implications for a dopamine theory of reward. Science. 1975; 187(4176): 547–549. PubMed Abstract | Publisher Full Text\n\nGorriti MA, Rodriguez de Fonseca F, Navarro M, et al.: Chronic (-)-delta9-tetrahydrocannabinol treatment induces sensitization to the psychomotor effects of amphetamine in rats. Eur J Pharmacol. 1999; 365(2–3): 133–142. PubMed Abstract | Publisher Full Text\n\nLamarque S, Taghzouti K, Simon H: Chronic treatment with Delta9-tetrahydrocannabinol enhances the locomotor response to amphetamine and heroin. Implications for vulnerability to drug addiction. Neuropharmacology. 2001; 41(1): 118–129. PubMed Abstract | Publisher Full Text\n\nPryor GT, Larsen FF, Husain S, et al.: Interactions of delta9-tetrahydrocannabinol with d-amphetamine, cocaine, and nicotine in rats. Pharmacol Biochem Behav. 1978; 8(3): 295–318. PubMed Abstract | Publisher Full Text\n\nArnold JC, Topple AN, Hunt GE, et al.: Effects of pre-exposure and co-administration of the cannabinoid receptor agonist CP 55,940 on behavioral sensitization to cocaine. Eur J Pharmacol. 1998; 354(1): 9–16. 
PubMed Abstract | Publisher Full Text\n\nWenger T, Croix D, Tramu G: The effect of chronic prepubertal administration of marihuana (delta-9-tetrahydrocannabinol) on the onset of puberty and the postpubertal reproductive functions in female rats. Biol Reprod. 1988; 39(3): 540–545. PubMed Abstract | Publisher Full Text\n\nBraida D, Iosuè S, Pegorini S, et al.: Delta9-tetrahydrocannabinol-induced conditioned place preference and intracerebroventricular self-administration in rats. Eur J Pharmacol. 2004; 506(1): 63–69. PubMed Abstract | Publisher Full Text\n\nMaldonado R, Rodriguez de Fonseca F: Cannabinoid addiction: behavioral models and neural correlates. J Neurosci. 2002; 22(9): 3326–3331. PubMed Abstract\n\nZangen A, Solinas M, Ikemoto S, et al.: Two brain sites for cannabinoid reward. J Neurosci. 2006; 26(18): 4901–4907. PubMed Abstract | Publisher Full Text\n\nTakahashi RN, Singer G: Self-administration of delta9-tetrahydrocannabinol by rats. Pharmacol Biochem Behav. 1979; 11(6): 737–740. PubMed Abstract | Publisher Full Text\n\nTanda G, Munzar P, Goldberg SR: Self-administration behavior is maintained by the psychoactive ingredient of marijuana in squirrel monkeys. Nat Neurosci. 2000; 3(11): 1073–1074. PubMed Abstract | Publisher Full Text\n\nCadoni C, Pisanu A, Solinas M, et al.: Behavioural sensitization after repeated exposure to Delta 9-tetrahydrocannabinol and cross-sensitization with morphine. Psychopharmacology (Berl). 2001; 158(3): 259–266. PubMed Abstract | Publisher Full Text\n\nEllgren M, Spano SM, Hurd YL: Adolescent cannabis exposure alters opiate intake and opioid limbic neuronal populations in adult rats. Neuropsychopharmacology. 2007; 32(3): 607–615. PubMed Abstract | Publisher Full Text\n\nVela G, Fuentes JA, Bonnin A, et al.: Perinatal exposure to delta9-tetrahydrocannabinol (delta9-THC) leads to changes in opioid-related behavioral patterns in rats. Brain Res. 1995; 680(1–2): 142–147. 
PubMed Abstract | Publisher Full Text\n\nLedent C, Valverde O, Cossu G, et al.: Unresponsiveness to cannabinoids and reduced addictive effects of opiates in CB1 receptor knockout mice. Science. 1999; 283(5400): 401–404. PubMed Abstract | Publisher Full Text\n\nKosten TA, Ambrosio E: HPA axis function and drug addictive behaviors: insights from studies with Lewis and Fischer 344 inbred rats. Psychoneuroendocrinology. 2002; 27(1–2): 35–69. PubMed Abstract | Publisher Full Text\n\nWu HH, Wang S: Strain differences in the chronic mild stress animal model of depression. Behav Brain Res. 2010; 213(1): 94–102. PubMed Abstract | Publisher Full Text\n\nHenry C, Kabbaj M, Simon H, et al.: Prenatal stress increases the hypothalamo-pituitary-adrenal axis response in young and adult rats. J Neuroendocrinol. 1994; 6(3): 341–5. PubMed Abstract | Publisher Full Text\n\nMuneoka K, Mikuni M, Ogawa T, et al.: Prenatal dexamethasone exposure alters brain monoamine metabolism and adrenocortical response in rat offspring. Am J Physiol. 1997; 273(5 Pt 2): R1669–1675. PubMed Abstract | Publisher Full Text\n\nMeaney MJ, Brake W, Gratton A: Environmental regulation of the development of mesolimbic dopamine systems: a neurobiological mechanism for vulnerability to drug abuse? Psychoneuroendocrinology. 2002; 27(1–2): 127–38. PubMed Abstract | Publisher Full Text\n\nStairs DJ, Bardo MT: Neurobehavioral Effects of Environmental Enrichment and Drug Abuse Vulnerability. Pharmacol Biochem Behav. 2009; 92(3): 377–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchenk S, Lacelle G, Gorman K, et al.: Cocaine self-administration in rats influenced by environmental conditions: implications for the etiology of drug abuse. Neurosci Lett. 1987; 81(1–2): 227–31. PubMed Abstract | Publisher Full Text\n\nStöhr T, Schulte Wermeling D, Weiner I, et al.: Rat strain differences in open-field behavior and the locomotor stimulating and rewarding effects of amphetamine. Pharmacol Biochem Behav. 
1998; 59(4): 813–818. PubMed Abstract | Publisher Full Text\n\nBeatty WW, Holzer GA: Sex differences in stereotyped behavior in the rat. Pharmacol Biochem Behav. 1978; 9(6): 777–783. PubMed Abstract | Publisher Full Text\n\nCompton DR, Johnson KM: Effects of acute and chronic clozapine and haloperidol on in vitro release of acetylcholine and dopamine from striatum and nucleus accumbens. J Pharmacol Exp Ther. 1989; 248(2): 521–530. PubMed Abstract\n\nSavageau MM, Beatty WW: Gonadectomy and sex differences in the behavioral responses to amphetamine and apomorphine of rats. Pharmacol Biochem Behav. 1981; 14(1): 17–21. PubMed Abstract | Publisher Full Text\n\nTseng AH, Craft RM: Sex differences in antinociceptive and motoric effects of cannabinoids. Eur J Pharmacol. 2001; 430(1): 41–47. PubMed Abstract | Publisher Full Text\n\nBecker JB: Estrogen rapidly potentiates amphetamine-induced striatal dopamine release and rotational behavior during microdialysis. Neurosci Lett. 1990; 118(2): 169–171. PubMed Abstract | Publisher Full Text\n\nBecker JB, Beer ME: The influence of estrogen on nigrostriatal dopamine activity: behavioral and neurochemical evidence for both pre- and postsynaptic components. Behav Brain Res. 1986; 19(1): 27–33. PubMed Abstract | Publisher Full Text\n\nPeris J, Decambre N, Coleman-Hardee ML, et al.: Estradiol enhances behavioral sensitization to cocaine and amphetamine-stimulated striatal [3H]dopamine release. Brain Res. 1991; 566(1–2): 255–264. PubMed Abstract | Publisher Full Text\n\nKeeley RJ, Bye C, Trow J, et al.: Dataset 1 in: Adolescent THC exposure does not sensitize conditioned place preferences to subthreshold d-amphetamine in male and female rats. F1000Research. 2018. Data Source"
}
|
[
{
"id": "32404",
"date": "04 Apr 2018",
"name": "Ryan J. McLaughlin",
"expertise": [
"Reviewer Expertise Endocannabinoids",
"cannabis",
"stress",
"reward",
"sex differences"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article contributes valuable insight into issues that are commonly observed in literature regarding the impact of adolescent Δ9-tetrahydrocannabinol (THC) exposure on future drug-seeking behavior. Particularly, careful attention is directed towards the influence of sex (including estrous cycle) and animal strain on modulation of drug-seeking behavior by adolescent THC administration. These crucial factors likely contribute to conflicting findings reported on the influence of exogenous and endogenous cannabinoids on future drug consumption, and Keeley and colleagues illustrate the necessity of implementing sex and strain variables in experimental designs when addressing such research questions. As such, the authors’ findings provide a more informative story that will be useful in guiding legislation on marijuana use policies. The introduction of this article provides clear explication of hypotheses and expected results as well as suitable background information that warrants investigation of the authors’ research questions. However, there are some minor issues regarding the presentation of methods and statistical results. Additionally, there are some aspects of the discussion that could benefit from citation of supporting literature.\nMethods In the statistical analysis section, the authors indicate that effect size and statistical power would be reported for all results (pg. 5), yet these data are not provided when presenting results in text. 
The analytical approaches utilized by the researchers are appropriate for the data, although brief explanations of post-hoc statistical tests should be included.\nIt is stated in the Subjects section (pg. 3), that all rats were purchased and shipped from Charles River. However, in the first paragraph of the discussion, the authors allude to an effect of rearing environment on d-amphetamine sensitivity. Specifically, Long Evans females bred in-house showed CPP to the 0.7 mg/kg dose, but this was absent in Long Evans females obtained from a commercial breeder. These data are valuable and should be included, but the analyses do not appear anywhere in the manuscript.\nInclusion of an experimental timeline would increase the clarity of the researchers’ methodological approach. It would also provide a useful model for future researchers interested in testing the effects of THC administration on the reinforcing effects of drugs of abuse.\nGiven that an observer scored dwell time in the CPP chambers, it would be beneficial to include the familiarity of the observer with animals’ drug treatment group. In other words, were the observers blinded to treatment conditions?\nThe authors state that 3 doses of D-amphetamine were used, but only 2 doses are listed (0.5 and 0.7 mg/kg). We assume that the third dose is the vehicle (0 mg/kg), but that should be explicitly listed in the Methods (see pg. 3).\n\nResults:\nThe results are clearly and concisely stated, and the provided figures summarize the data well. However, Table 1 referenced in the “Experiment 1: Determination of a subthreshold dose of d-amphetamine” section and Table 2 referenced in the “Experiment 2: CPP to a sub-threshold dose of amphetamine” section are not provided in the article (see pg. 5).\nIn the caption for Figure 1, there is a space inserted into the word “Note”. Also in this caption it is stated that mean and “SEE” are provided. 
Presumably this was supposed to read SEM.\nThe authors state that estrous cycle did not have any effect on the results (pg. 5). However, with only an N=8, it may simply be that there was not sufficient power to detect any differences. This should be briefly mentioned as a potential caveat.\nDiscussion:\nOn pg. 8, the authors indicate that “this study does not support a link between adolescent THC exposure and sensitivity to other drugs of abuse…”. Given that only sensitivity to d-amphetamine was assessed, it may be more reasonable to conclude from the provided results that adolescent THC exposure does not enhance sensitivity to d-amphetamine specifically.\nOn pg. 7 – 8, the authors suggest that “…given the increased abuse of prescription opiates, research examining the interplay between the endogenous cannabinoid and opioid systems could potentially prevent the transition of using marijuana to opiates.” This is an exciting prospect for future research, and some progress has been made in this regard. The authors may find it useful to include conclusions from Markos et al. (2017)1 in which cannabidiol is shown to attenuate morphine CPP in mice.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3963",
"date": "27 Sep 2018",
"name": "Robin Keeley",
"role": "Author Response",
"response": "We thank the Referees for their thought-provoking as well as useful suggestions. We have addressed all the comments raised through modifications to the manuscript. Specifically, as in response to Dr. Khokar, we have clarified where the subjects were received from. Specifically, rats purchased from Charles River were used to determine the sub-threshold dose of d-amphetamine, and the offspring of rats purchased from Charles River and bred at the University of Lethbridge were used for the experiments examining the effects of adolescent THC to the response to amphetamine. To further clarify, we have included a supplementary figure as a timeline."
}
]
},
{
"id": "32202",
"date": "25 Apr 2018",
"name": "Jibran Y. Khokhar",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis report by Keeley et al., examines the effects of adolescent THC exposure on conditioned place preference for d-amphetamine. While no effects of adolescent THC exposure were found on the CPP or the neural correlates (c-fos expression), the authors' inclusion of important factors such as strain and sex make this an important study to consider for those interested in studying the effects of developmental cannabinoid exposure on future risk for substance use. Suggested changes would help to improve the impact of this paper.\nAbstract:\n\nLast Sentence: include \"sub-threshold dose of\" before d-amphetamine\n\nIntroduction:\nThe introduction begins with mentions of psychosis, anxiety and depression, whereas the central questions being asked in this study are not related to those topics. The authors should think about setting the research question in the opening paragraph.\n\nMethods:\nSubjects: This section is unclear. Are the subjects being referred to here the parents of the rats used in the study? 
D-Amphetamine Doses: 1 mg/kg not listed here\n\nStatistical Analyses: Unclear whether dwell times were also compared within-animal pre- and post-training?\n\nResults:\nTables missing, but mentioned in the text.\n\nDiscussion:\nSince the authors chose to discuss psychosis in the introduction, maybe they can discuss it in the context of the null results of d-amphetamine on c-fos expression, since an accentuated response to amphetamine is something that is seen in patients with schizophrenia.\n\nAn excellent review by Chadwick and Hurd1 might help to formulate some of these thoughts.\n\nRearing environment was not one of the research objectives/hypotheses and the relevance to this discussion needs to be strengthened.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3962",
"date": "27 Sep 2018",
"name": "Robin Keeley",
"role": "Author Response",
"response": "We would like to thank the Referee for their thoughtful comments on our research. We have specifically addressed all comments that you mentioned and further clarified any statements as required. Specifically, we found your insight on the topic of schizophrenia particularly interesting. We believe that if this research study had been conducted in an animal model of schizophrenia, the results may have been significantly different.Thank you again.Best,Robin J. Keeley"
}
]
}
] | 1
|
https://f1000research.com/articles/7-342
|
https://f1000research.com/articles/7-1176/v1
|
02 Aug 18
|
{
"type": "Research Article",
"title": "Smaller clinical trials for decision making; using p-values could be costly",
"authors": [
"Nicholas Graves",
"Adrian G. Barnett",
"Edward Burn",
"David Cook",
"Adrian G. Barnett",
"Edward Burn",
"David Cook"
],
"abstract": "Background: Clinical trials might be larger than needed because arbitrary high levels of statistical confidence are sought in the results. Traditional sample size calculations ignore the marginal value of the information collected for decision making. The statistical hypothesis testing objective is misaligned with the goal of generating information necessary for decision-making. The aim of the present study was to show that a clinical trial designed to test a prior hypothesis against an arbitrary threshold of confidence may recruit too many participants, wasting scarce research dollars and exposing participants to research unnecessarily. Methods: We used data from a recent RCT powered for traditional rules of statistical significance. The data were also used for an economic analysis to show the intervention led to cost savings and improved health outcomes. Adoption represented a good investment for decision-makers. We examined the effect of reducing the trial’s sample size on the results of the statistical hypothesis-testing analysis and the conclusions that would be drawn by decision-makers reading the economic analysis. Results: As the sample size reduced it became more likely that the null hypothesis of no difference in the primary outcome between groups would fail to be rejected. For decision-makers reading the economic analysis, reducing the sample size had little effect on the conclusion about whether to adopt the intervention. There was always high probability the intervention reduced costs and improved health. Conclusions: Decision makers managing health services are largely invariant to the sample size of the primary trial and the arbitrary p-value of 0.05. If the goal is to make a good decision about whether the intervention should be adopted widely, then that could have been achieved with a much smaller trial. 
It is plausible that hundreds of millions of research dollars are wasted each year recruiting more participants than required for RCTs.",
"keywords": [
"decision making",
"RCT",
"sample size",
"waste in research"
],
"content": "Introduction\n\nInformed patients, thoughtful clinicians and rational health planners make decisions about the services and treatments provided using the best information available, and all decisions are made under conditions of uncertainty1,2. We examine a situation where sufficient evidence arises from a clinical trial to inform a decision about changing services before the conventional statistical stopping point for a clinical trial is reached. This paper is about the tension between the ‘precision’ and the ‘impact’ of a scientific measurement3 and how that tension might dictate the sample size of a clinical trial.\n\nImagine a new treatment is compared against the best contemporary alternative in a well conducted randomised controlled trial (RCT). The design requires 800 participants in total based on a standard sample size calculation of 5% type 1 error and 80% power. The new treatment is more efficacious, prolongs life of high quality and saves more money than it costs to implement. The evidence to support these conclusions can be seen in the data after only 200 trial participants have been recruited, but primary outcomes are not yet statistically significant. Clinical equipoise, the cornerstone of ethical treatment allocation is lost, yet the conventions of hypothesis testing and arbitrary power calculation demand a further 600 participants are recruited. The information arising from the additional 600 participants is unlikely to change the actions of a rational decision maker who wishes to adopt the new treatment. Yet scarce research funds are used up meaning opportunities to fund other research are lost, and some patients have been consented and allocated to a treatment that we could not recommend, nor would we chose for ourselves or our families.\n\nThe utility of clinical trials for those managing health services and making clinical decisions is under debate and traditional paradigms are being challenged4. 
The chief claim of this paper is that an RCT designed to test a hypothesis using traditional rules of inference might have more participants than required, if the goal is to make a good decision. Waste in research arises from routine use of arbitrary levels of statistical confidence5 and because the trial data are considered in isolation6. The marginal value of the information acquired for the purpose of making a good decision is not made explicit. Important information for the purpose of decision making often lies outside the clinical trial process. The plausibility of our claim is demonstrated by re-analysing a recent RCT7.\n\nFor the design of a superiority trial, the aim is to have a high likelihood of sufficient evidence to confidently reject a null hypothesis that two treatments are equivalent when treatments differ by a specified difference. This difference is usually based on either clinical importance or a best guess of the true treatment effect. Inference based on this approach admits two types of error: a false positive, or type I error, of rejecting the null hypothesis when there is no difference, with probability α; and a false negative, or type II error, of failing to reject the null hypothesis when there is an effect, with probability β. The sample size of the trial is calculated to give an acceptable type I error rate and power (1–β), typically 0.05 for α and 0.8 to 0.9 for the power. The final analysis summarises the incompatibility between the data and the null hypothesis8. If the p-value is below the standard 5% limit, the null hypothesis of no effect is rejected. A ‘statistically significant’ result is then celebrated and typically used to support a decision to make a change to health services.\n\nWe assume the objective of decision-makers who manage health services is to improve outcomes for the populations they serve. 
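The sample size arithmetic described above can be made concrete. The sketch below implements the standard normal-approximation formula for comparing two group means, n per group = 2((z_{1−α/2} + z_{power})σ/δ)²; the function name and the illustrative standard deviation are our own assumptions, not values taken from the paper.

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm comparison of means
    (normal approximation): n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)**2."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return math.ceil(n)

# a 4.5 mg/dL difference at 90% power, with an assumed sd of 18.4 mg/dL
print(two_sample_n(4.5, 18.4, power=0.90))  # per-group n; total is twice this
```

With that illustrative (back-calculated) sd, the total comes out close to the 704 participants required by the TEXT ME calculation mentioned later in the text; the point is simply that the conventional n follows mechanically from α, power, σ and δ, with no reference to the value of the information for decision making.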
Because this challenge will be addressed with finite resources not every service or new technology can be made available for a population. Decision-makers therefore require knowledge of the health foregone from not funding services displaced by the services that are funded9. The services that are provided should generate more health benefits per dollar of cost when compared to those that are not. With this criterion satisfied the opportunity cost from the services not provided is minimised. A rational decision maker will logically follow these rules: do not adopt programmes that worsen health outcomes and increase cost; adopt programmes that improve health outcomes and decrease costs; and, when they face a situation of increased cost for increased health outcomes they prioritise programmes that provide additional health benefits for the lowest extra cost10. They will continue choosing cost-effective services until available health budgets are exhausted. An appropriate and generic measure of health benefit is the quality adjusted life year (QALY)11. While this approach does not consider how health benefits are distributed among the population there is a framework for including health inequalities in the economic assessment of health care programmes12.\n\nIn choosing a sample size for a clinical trial to evaluate a new service or technology a decision-maker will consider the uncertainty in the conclusion about how costs and health benefits change by adoption. The aim is to reduce the likelihood of making the wrong decision. They will make rational and good decisions, and they will manage uncertainty rather than demand an arbitrarily high probability of rejecting a null hypothesis. Methods are available to estimate the expected value of information and so the optimal sample size for a trial is dependent on the context specific costs and benefits of acquiring extra information13. 
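The adoption rules just listed can be written down directly. This is a minimal sketch, not code from the paper: the function name and the willingness-to-pay threshold per QALY are illustrative assumptions, and the trade-off branch uses the standard net monetary benefit criterion.

```python
def adopt_decision(delta_cost, delta_qaly, wtp_per_qaly=50_000):
    """Rational adoption rule sketched in the text: reject dominated
    programmes, adopt dominant ones, otherwise adopt only if the health
    gained is worth more than the extra cost at the given willingness-to-pay
    threshold (the 50,000-per-QALY default is an illustrative assumption)."""
    if delta_qaly <= 0 and delta_cost >= 0:
        # worse (or no better) health at equal or higher cost: dominated
        return "reject"
    if delta_qaly >= 0 and delta_cost <= 0:
        # better (or equal) health at equal or lower cost: dominant
        return "adopt"
    # trade-off quadrants: net monetary benefit = wtp * QALY gain - extra cost
    nmb = wtp_per_qaly * delta_qaly - delta_cost
    return "adopt" if nmb > 0 else "reject"
```

An intervention that both saves money and improves health, as the economic analysis later concludes for TEXT ME, returns "adopt" regardless of the threshold; the threshold only matters in the trade-off quadrants.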
Each decision is context dependent and the ‘one size fits all’ approach to sample size calculation is arbitrary and potentially wasteful. This holistic approach should be a priority for designing, monitoring and analysing clinical trials.\n\n\nMethods\n\nA case study to illustrate the differing evidential requirements of the ‘hypothesis-testing’ and ‘decision-making’ approaches is provided by the RCT of the Tobacco, Exercise and Diet Messages (TEXT ME) intervention14. This health services program targeted multiple influential risk factors in patients with coronary heart disease, with SMS text messages. Advice and motivation were provided to improve health behaviours, supplementary to usual care. The hypothesis was that the intervention would lower plasma low-density lipoprotein cholesterol by 4.5 mg/dL at 6 months for participants compared with those receiving usual care15. The required sample size was 704 participants for 90% power15 and the trial recruited and randomised 710 participants7. The mean difference between the intervention and control group was –5 mg/dL (95% CI –9 to 0 mg/dL). With a p-value of 0.04, the null hypothesis was rejected. Evidence for health effects was also sought on other biomedical and behavioural risk factors, quality of life, primary care use and re-hospitalisations. Clinically and statistically significant effects were also found for systolic blood pressure (mean difference –8 mmHg, p<0.001), body mass index (–1.3 kg/m2, p<0.001) and current smoking (relative risk of 0.61, p<0.001).\n\nThe TEXT ME trial data were used to inform an economic evaluation of the potential change to costs and health benefits measured in quality adjusted life years to the community from a decision to adopt the programme16. 
The observed differences in low-density lipoprotein cholesterol, systolic blood pressure and smoking were combined with reliable external epidemiological evidence to estimate the reduction in acute coronary events, myocardial infarction and stroke and were extrapolated over the patients’ expected remaining lifetimes. The costs of providing the intervention, the projected costs of the treatment of acute events and general primary care use and expected mortality were all informed by data sources external to the primary trial16. The findings revealed that TEXT ME was certainly going to lead to better health outcomes and cost savings. The conclusion was that a rational decision-maker should fund and implement the TEXT ME program. Once available, an informed clinician would then recommend TEXT ME to coronary patients, and enough patients would sign up to create benefits for individuals and the health system. Using the TEXT ME study, we consider whether the same decision could have been made at an earlier stage with fewer participants enrolled in the primary trial.\n\nWe examine the effect of a reduced sample size on the results of both the hypothesis-testing analysis for differences in low-density lipoprotein cholesterol, and the economic evaluation of the intervention. From the original 710 participants, smaller samples between 100 and 700 patients in increments of 100 were considered, with the resampling done with replacement. The ‘p-value’ and ‘economic’ analyses were re-run using the data provided by the randomly selected patients, and this process was repeated 500 times for each sample size. The simulations and figures were created using R (version 3.1.0). 
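Because the primary TEXT ME data cannot be shared, the resampling scheme just described can only be illustrated on synthetic data. The Python sketch below (the authors' own analysis was in R) draws samples of a given size with replacement and records how often a simple two-sided test rejects at p < 0.05; the group means, standard deviation and arm sizes are invented stand-ins, and the normal-approximation z-test substitutes for whatever test was used in the trial.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)  # fixed seed so the sketch is reproducible

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# synthetic stand-in for the trial data: mean difference -5 mg/dL, sd 20,
# 355 per arm -- all illustrative assumptions, not the TEXT ME data
control = [random.gauss(100, 20) for _ in range(355)]
treated = [random.gauss(95, 20) for _ in range(355)]

def rejection_rate(n, reps=200):
    """Fraction of resamples (with replacement) of size n per arm that
    reject the null at p < 0.05 -- the paper's resampling scheme in miniature
    (the paper used 500 repetitions per sample size)."""
    hits = 0
    for _ in range(reps):
        a = random.choices(control, k=n)
        b = random.choices(treated, k=n)
        hits += z_test_p(a, b) < 0.05
    return hits / reps

for n in (100, 300, 500, 700):
    print(n, rejection_rate(n))
```

The rejection rate climbs toward 1 as the per-arm sample grows, mirroring Figure 1: small resamples are often "underpowered" for the hypothesis test even though, in the economic analysis, the adoption decision is already stable.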
The code is available on GitHub https://github.com/agbarnett/smaller.trials but we are unable to share the primary data from the TEXT ME RCT.\n\n\nResults\n\nThe effect of reducing the sample size for hypothesis-testing objectives was to simulate studies that traditional hypothesis testing approaches would deem underpowered, see Figure 1.\n\nThe dotted horizontal line is the standard 5% threshold. The boxes are the 25th and 75th percentiles with the median as the central line. The upper whisker extends from the third quartile to the largest value no further than 1.5 * IQR from the quartile (where IQR is the inter-quartile range). The lower whisker extends from the 1st quartile to the smallest value at most 1.5 * IQR of the quartile. Data beyond the end of the whiskers are called ‘outlying’ points and are plotted individually.\n\nOnly for a sample size of 500 participants or more would the majority of trials find a statistically significant difference in average low-density lipoprotein cholesterol between groups (Figure 1). Even at a sample size of 700 around 30% of trials would be expected to make the ‘wrong’ inference of not rejecting the null hypothesis. This is consistent with a priori analytic estimates of sample size to address the hypothesis.\n\nTo inform decision making using cost-effectiveness as the criterion, reducing the sample size has little effect on the conclusion of whether to fund, recommend and participate in TEXT ME, see Figure 2. For every simulation for each sample size the decision to adopt TEXT ME led to cost savings shown on the y-axis and gains to health, measured by QALYs shown on the x-axis.\n\nThe x-axis shows the QALY gains for TEXT ME over usual care, and the y-axis shows the cost savings.\n\nA sample size of 100 or more in the primary trial would convince a risk neutral and rational decision maker that TEXT ME is both cost-saving and health improving, and so should be adopted. 
The imprecision surrounding this inference increases as the sample size reduces, but the decision-making inference does not change. If the goal is to make a good decision about whether TEXT ME should be adopted widely, then that could have been achieved with a much smaller trial, one that enrolled as few as 100 patients. This would have been a cheaper and quicker research project releasing scarce research dollars for other important projects.\n\n\nDiscussion\n\nRCTs have become “massive bureaucratic and corporate enterprises, demanding costly infrastructure for research design, patient care, record keeping, ethical review, and statistical analysis”17. A single phase 3 RCT could today cost $30 million or more18 and take several years from inception to finalisation. These trials are powered for arbitrary rules of statistical significance. Critics of this approach3 argue “that some of the sciences have made a mistake, by basing decisions on statistical significance” and that “in daily use it produces unchecked a loss of jobs, justice, profit, and even life”. The mistake made by the so called ‘sizeless scientist’ is to favour ‘Precision’ over ‘Oomph’. A ‘sizeless scientist’ is more interested in how precisely an outcome is estimated and less interested in the size of the implications for society or health services of any observed change in the outcome. They do not appear interested in the facts that “significant does not mean important and insignificant does not mean unimportant”. Even experts in statistics have been shown to interpret evidence poorly, based on whether the p-value crosses the threshold of 5% for statistical significance19.\n\nResearchers are calling for a shift towards applied research designed for decision making20. Patients, clinicians and payers of health care are interested in whether some novel treatment or health programme should be adopted over the alternatives. 
There are many choices to be evaluated and many useful clinical trials to be undertaken, yet research budgets to support this are insufficient21. Funding a larger number of smaller trials, so that correct decisions about how to organise health services can be made more frequently, is a sensible goal.\n\nA hypothesis-testing approach maintains that a uniform level of certainty around these decisions is desirable, and needed by all stakeholders: managers, clinicians and patients. Yet the costs and benefits of every decision made are context-specific. Striving to eliminate uncertainty is likely to be an inefficient use of research funding where the benefit of achieving a given level of certainty is low or the prescribed precision unnecessary. Decision-making should address the costs and benefits throughout the life cycle of an intervention22, with consideration of whether decisions could be made based on current evidence and whether additional research needs to be undertaken23. Other considerations for decision making under conditions of uncertainty have been established and reviewed in detail24. Our observations contradict advice from Nagendran et al.25, who suggest researchers aim to “conduct studies that are larger and properly powered to detect modest effects”. This approach promotes using p-values for decision making without a broader evaluation of all the outcomes relevant to the decision.\n\nWe suggest the decision-making approach to sample size calculation would often lead to smaller trials, but not always. If rare adverse events had a substantial impact on cost and health outcomes, the trial may need to be larger than a hypothesis-testing trial powered for a single outcome other than the adverse event. This may especially be the case for trials of new drugs. There are some good arguments against smaller trials. A large trial with lots of data might help future-proof an adoption decision.
If costs, frequencies of adverse events or baseline risks change over time, then a large trial might provide sufficient information to defend the adoption decision in the future, compared with a small trial. There might also not be another opportunity to run an RCT, for ethical or funding reasons, and so gathering a lot of data when the chance arises could be wise. Smaller trials, even when well designed, might find a positive result that overestimates the real effect26. This may have happened with our example of TEXT ME, and a more conservative estimate of the intervention effect would likely come from a meta-analysis or repeated trial. Indeed, Prasad et al.27 found that, of 2,044 articles published over 10 years in a leading medical journal, 1,344 were about a medical practice, 363 of them tested an established medical practice, and for 146 (40%) the finding was that the practice was no better than, or worse than, the comparator, implying a reversal of practice. Those who deliver health services are unlikely to be rational and risk-neutral. There is often scepticism and inertia when a change to practice is suggested and some clinicians will only change when evidence is overwhelming. Lau et al.28 did a cumulative meta-analysis of intravenous streptokinase for acute myocardial infarction with mortality as the primary outcome. They showed the probability the treatment reduced mortality was greater than 97.5% by 1973, after 2,432 patients had been enrolled in eight trials. By 1977, after 4,084 patients had been enrolled in thirteen trials, the probability the treatment was effective was more than 99.5%. By 1988, 36 trials had been completed with 36,974 patients included, confirming the previous conclusion.\n\nOur case study demonstrates - for a single carefully conducted trial - that more information might have been collected than was necessary to make a good decision about adopting the intervention.
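The cumulative meta-analysis logic in Lau et al.'s example can be sketched with fixed-effect, inverse-variance pooling on the log odds-ratio scale. The trial estimates below are invented for illustration; they are not the streptokinase data:

```python
import math

# Cumulative fixed-effect (inverse-variance) meta-analysis on log odds ratios.
# Each tuple is (log OR, standard error) for one invented trial.
trials = [(-0.30, 0.25), (-0.10, 0.30), (-0.25, 0.20), (-0.20, 0.15)]

weight_sum = effect_sum = 0.0
for i, (log_or, se) in enumerate(trials, start=1):
    w = 1.0 / se ** 2                     # inverse-variance weight
    weight_sum += w
    effect_sum += w * log_or
    pooled = effect_sum / weight_sum
    pooled_se = math.sqrt(1.0 / weight_sum)
    # Probability the pooled effect is beneficial (log OR < 0), normal approx.
    p_benefit = 0.5 * (1 + math.erf(-pooled / (pooled_se * math.sqrt(2))))
    print(f"after trial {i}: pooled log OR {pooled:.2f}, P(benefit) {p_benefit:.3f}")
```

As trials accumulate, the pooled standard error shrinks, so the probability of benefit can pass a practical decision threshold long before the final trial reports, which is the pattern Lau et al. observed.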
We did not cherry-pick this trial, but selected it because it was the subject of a recent economic analysis and had broad implications for health. The differences in necessary sample sizes and evidence will depend on the context and design of trials. It might often be that smaller, and therefore faster and cheaper, trials are sufficient for good decision-making. This would release scarce research dollars that funding bodies could use for other valuable projects. Our approach is part of the drive toward increasing the value of health and medical research, which currently has a poor return with an estimated 85% of investment wasted29. Further, as adaptive trials gain traction, decision-based designs provide flexibility, facilitating faster evolution of implementable findings.\n\n\nData availability\n\nThe datasets used and/or analysed for the TEXT ME trial are not publicly available due to data sharing not being approved by the local ethics committee. To access the data, the corresponding author of the primary trial should be contacted (cchow@georgeinstitute.org.au).\n\nA random sample of data with similar features to the TEXT ME clinical trial data is provided in the code used to create the simulations and figures, which is available on GitHub: https://github.com/agbarnett/smaller.trials\n\nArchived code as at time of publication: http://doi.org/10.5281/zenodo.132245930\n\nDataset 1: Data used for a simulation of Figure 2. DOI: 10.5256/f1000research.15522.d21237731",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe TEXT ME trial was supported by peer-reviewed grants from the National Heart Foundation of Australia Grant-in-Aid (G10S5110) and a BUPA Foundation Grant. We acknowledge the team who designed and conducted the TEXT ME trial and allowed us to re-analyse the data for the purpose of this paper: Clara Chow, Julie Redfern, Graham Hillis, Aravinda Thiagalingam, Stephen Jan, Maree Hackett, Robyn Whittaker. They did not provide editorial input or endorsement. The TEXT ME trial was administered by The George Institute for Global Health, Sydney Medical School, University of Sydney, Sydney, Australia.\n\nRegarding the extra activity for this paper the authors declare that no grants were involved in supporting this work.\n\n\nReferences\n\nHunink MM, Weinstein MC, Wittenberg E, et al.: Decision making in health and medicine: integrating evidence and values. Cambridge University Press; 2014. Reference Source\n\nTversky A, Kahneman D: The framing of decisions and the psychology of choice. Science. 1982; 211(4481): 453–8. PubMed Abstract | Publisher Full Text\n\nZiliak S, McCloskey D: The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, MI.: The University of Michigan Press; 2008. Publisher Full Text\n\nWoodcock J, Ware JH, Miller PW, et al.: Clinical Trials Series. N Engl J Med. 2016; 374(22): 2167. Publisher Full Text\n\nClaxton K: The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies. J Health Econ. 1999; 18(3): 341–64. PubMed Abstract | Publisher Full Text\n\nGoodman SN: Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med. 1999; 130(12): 995–1004. 
PubMed Abstract | Publisher Full Text\n\nChow CK, Redfern J, Hillis GS, et al.: Effect of Lifestyle-Focused Text Messaging on Risk Factor Modification in Patients With Coronary Heart Disease: A Randomized Clinical Trial. JAMA. 2015; 314(12): 1255–63. PubMed Abstract | Publisher Full Text\n\nWasserstein RL, Lazar NA: The ASA's statement on p-values: context, process, and purpose. Am Stat. 2016; 70(2): 129–33. Publisher Full Text\n\nClaxton K, Palmer S, Longworth L, et al.: A Comprehensive Algorithm for Approval of Health Technologies With, Without, or Only in Research: The Key Principles for Informing Coverage Decisions. Value Health. 2016; 19(6): 885–91. PubMed Abstract | Publisher Full Text\n\nPhelps CE, Mushlin AI: On the (near) equivalence of cost-effectiveness and cost-benefit analyses. Int J Technol Assess Health Care. 1991; 7(1): 12–21. PubMed Abstract | Publisher Full Text\n\nTorrance GW: Measurement of health state utilities for economic appraisal. J Health Econ. 1986; 5(1): 1–30. PubMed Abstract | Publisher Full Text\n\nAsaria M, Griffin S, Cookson R: Distributional Cost-Effectiveness Analysis: A Tutorial. Med Decis Making. 2016; 36(1): 8–19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClaxton K: Bayesian approaches to the value of information: implications for the regulation of new pharmaceuticals. Health Econ. 1999; 8(3): 269–74. PubMed Abstract | Publisher Full Text\n\nRedfern J, Thiagalingam A, Jan S, et al.: Development of a set of mobile phone text messages designed for prevention of recurrent cardiovascular events. Eur J Prev Cardiol. 2014; 21(4): 492–9. PubMed Abstract | Publisher Full Text\n\nChow CK, Redfern J, Thiagalingam A, et al.: Design and rationale of the tobacco, exercise and diet messages (TEXT ME) trial of a text message-based intervention for ongoing prevention of cardiovascular disease in people with coronary disease: a randomised controlled trial protocol. BMJ Open. 2012; 2(1): e000606. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurn E, Nghiem S, Jan S, et al.: Cost-effectiveness of a text message programme for the prevention of recurrent cardiovascular events. Heart. 2017; 103(12): 893–4. PubMed Abstract | Publisher Full Text\n\nBothwell LE, Greene JA, Podolsky SH, et al.: Assessing the Gold Standard - Lessons from the History of RCTs. N Engl J Med. 2016; 374(22): 2175–81. PubMed Abstract | Publisher Full Text\n\nSertkaya A, Birkenbach A, Berlind A, et al.: Examination of clinical trial costs and barriers for drug development: report to the Assistant Secretary of Planning and Evaluation (ASPE). Washington, DC: Department of Health and Human Services; 2014. Reference Source\n\nMcShane BB, Gal D: Statistical Significance and the Dichotomization of Evidence. J Am Stat Assoc. 2017; 112(519): 885–95. Publisher Full Text\n\nLieu TA, Platt R: Applied Research and Development in Health Care - Time for a Frameshift. N Engl J Med. 2017; 376(8): 710–3. PubMed Abstract | Publisher Full Text\n\nVan Noorden R: UK government warned over 'catastrophic' cuts. Nature. 2010; 466(7305): 420–1. PubMed Abstract | Publisher Full Text\n\nSculpher M, Drummond M, Buxton M: The iterative use of economic evaluation as part of the process of health technology assessment. J Health Serv Res Policy. 1997; 2(1): 26–30. PubMed Abstract | Publisher Full Text\n\nSculpher MJ, Claxton K, Drummond M, et al.: Whither trial-based economic evaluation for health care decision making? Health Econ. 2006; 15(7): 677–87. PubMed Abstract | Publisher Full Text\n\nClaxton K, Palmer S, Longworth L, et al.: Informing a decision framework for when NICE should recommend the use of health technologies only in the context of an appropriately designed programme of evidence development. Health Technol Assess. 2012; 16(46): 1–323. 
PubMed Abstract | Publisher Full Text\n\nNagendran M, Pereira TV, Kiew G, et al.: Very large treatment effects in randomised trials as an empirical marker to indicate whether subsequent trials are necessary: meta-epidemiological assessment. BMJ. 2016; 355: i5432. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarnett AG, van der Pols JC, Dobson AJ: Regression to the mean: what it is and how to deal with it. Int J Epidemiol. 2005; 34(1): 215–20. PubMed Abstract | Publisher Full Text\n\nPrasad V, Vandross A, Toomey C, et al.: A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc. 2013; 88(8): 790–8. PubMed Abstract | Publisher Full Text\n\nLau J, Schmid CH, Chalmers TC: Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995; 48(1): 45–57; discussion 59-60. PubMed Abstract | Publisher Full Text\n\nChalmers I, Glasziou P: Avoidable waste in the production and reporting of research evidence. Lancet. 2009; 374(9683): 86–9. PubMed Abstract | Publisher Full Text\n\nBarnett A: agbarnett/smaller.trials: First release of R code for smaller clinical trials (Version v1.0). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1322459\n\nGraves N, Barnett AG, Burn E, et al.: Dataset 1 in: Smaller clinical trials for decision making; using p-values could be costly. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15522.d212377"
}
|
[
{
"id": "37678",
"date": "07 Sep 2018",
"name": "Stephen Senn",
"expertise": [
"Reviewer Expertise I am a medical statistician with many years experience in dealing with problems associated with drug development and regulation."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors propose that when a clinical trial is sought to inform practical decision-making, conventional standards of 'proof' may be too stringent and in consequence resources may be wasted. They illustrate this by simulating from a particular clinical trial, the TEXT ME trial, using progressively smaller sample sizes and suggest that a useful decision could have been made with fewer patients.\nThe general argument presented is interesting and the conclusion that trials are sometimes too big if practical decision making is the object may well be correct. In this respect, a key distinction was made just over 50 years ago by Schwartz and Lelouch1 between what they called explanatory or pragmatic approaches. In the former case 'proof' of the efficacy of a new treatment may be sought. In the latter case one may simply wish to choose the (plausibly) better of two treatments.\nHowever, unless I have misunderstood what the authors are doing (which I do not exclude but in that case they should clarify this) the simulation is not a valid proof of what they claim, even for the example chosen.\nThe problem is the following. By simulating from the particular trial results, they are simulating from a universe in which the treatment is effective. This would be true even if the results from the TEXT ME trial had not been 'significant'. It is true of any trial in which the observed results favour the intervention. 
To see this consider that valid statistical analyses will typically have type I error rates in excess of a chosen nominal value if the mean under the intervention is greater (assuming high values are good) than the mean in the control group in the population in question. Provided that the type I error rate is controlled when this is not the case, this is a desirable property of such tests.\nUsually, the population in question is taken to be the population of all possible randomisations of the patients. Here, the authors sampled without replacement from the population. The population from which they are sampling is the population of results in the full TEXT ME trial. However, this is a population in which on average the results were better for the intervention.\nHindsight is an exact science but those making practical healthcare decisions are involved in the quite different game of foresight and they need to know whether the decision they are about to make is a reasonable one. This requires their allowing for the possibility that the intervention is useless or even harmful. Thus a mixture of possible situations has to be considered: simulating only from the case where the intervention is beneficial is not adequate.\nIn fact the precise nature of the mixture envisaged can have a huge effect on the inferences. Recently, a number of authors have called for statistical standards of evidence to be modified in the opposite direction. For instance Benjamin et al.2 have suggested that the standard of p=0.005 should be adopted. David Colquhoun3 has proposed an even more stringent standard of P=0.001. This flows from the particular approach to Bayesian hypothesis testing which places a lump of probability on no difference between treatments. (See my blog4 for a discussion.) 
In my opinion, these are not good suggestions for a number of reasons, including that such prior distributions are far too informative and that these authors implicitly assume, which is far from obviously the case, that the explanatory purpose of clinical trials is more important than the pragmatic one.\nHowever, I agree entirely with the authors, that as soon as practical decision-making involving economics is involved, it is the value of information that is important. In this connection, I can recommend the work of Forster, Pertile and colleagues5,6. See also Burman et al. 7\nThus, I think to make good their claim, the authors would, at the very least, need to simulate from a universe in which the intervention was not necessarily better than the control. Unless I have misunderstood, this was not the simulation they undertook.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "3987",
"date": "27 Sep 2018",
"name": "Nicholas Graves",
"role": "Author Response",
"response": "The authors propose that when a clinical trial is sought to inform practical decision-making, conventional standards of 'proof' may be too stringent and in consequence resources may be wasted. They illustrate this by simulating from a particular clinical trial, the TEXT ME trial, using progressively smaller sample sizes and suggest that a useful decision could have been made with fewer patients. The general argument presented is interesting and the conclusion that trials are sometimes too big if practical decision making is the object may well be correct. In this respect, a key distinction was made just over 50 years ago by Schwartz and Lelouch1 between what they called explanatory or pragmatic approaches. In the former case 'proof' of the efficacy of a new treatment may be sought. In the latter case one may simply wish to choose the (plausibly) better of two treatments. RESPONSE: Thanks for flagging this interesting paper on trials and pragmatic decision making. We certainly agree with them that, “many trials would be better approached pragmatically.” We have included this paper in the discussion section. However, unless I have misunderstood what the authors are doing (which I do not exclude but in that case they should clarify this) the simulation is not a valid proof of what they claim, even for the example chosen. RESPONSE: Our aim was to illustrate the principles of this approach using a case study rather than provide “proof” that this approach is always better. We have changed the title to reflect this. The problem is the following. By simulating from the particular trial results, they are simulating from a universe in which the treatment is effective. This would be true even if the results from the TEXT ME trial had not been 'significant'. It is true of any trial in which the observed results favour the intervention. 
To see this consider that valid statistical analyses will typically have type I error rates in excess of a chosen nominal value if the mean under the intervention is greater (assuming high values are good) than the mean in the control group in the population in question. Provided that the type I error rate is controlled when this is not the case, this is a desirable property of such tests. RESPONSE: We agree, although our approach includes the changes to costs from implementing the TEXT ME intervention, so the mean difference also has to be practically significant in order to recover these costs. Usually, the population in question is taken to be the population of all possible randomisations of the patients. Here, the authors sampled without replacement from the population. The population from which they are sampling is the population of results in the full TEXT ME trial. However, this is a population in which on average the results were better for the intervention. RESPONSE: We sampled with replacement. Using our approach, the group means were not always greater in the intervention group. For the primary outcome of LDL cholesterol, the mean was worse in the TEXT ME sample compared with usual care for around 22% of simulations when using the smallest sample size of 100. The mean difference in the secondary outcome of systolic blood pressure was stronger in the original data, and in simulations the mean in the TEXT ME sample was always lower (better) compared with the usual care group. Hindsight is an exact science but those making practical healthcare decisions are involved in the quite different game of foresight and they need to know whether the decision they are about to make is a reasonable one. This requires their allowing for the possibility that the intervention is useless or even harmful. Thus a mixture of possible situations has to be considered: simulating only from the case where the intervention is beneficial is not adequate. 
RESPONSE: Our aim is to provide results that are useful for decision makers, including estimates of uncertainty about the decision. In fact the precise nature of the mixture envisaged can have a huge effect on the inferences. Recently, a number of authors have called for statistical standards of evidence to be modified in the opposite direction. For instance Benjamin et al.2 have suggested that the standard of p=0.005 should be adopted. David Colquhoun3 has proposed an even more stringent standard of P=0.001. This flows from the particular approach to Bayesian hypothesis testing which places a lump of probability on no difference between treatments. (See my blog4 for a discussion.) In my opinion, these are not good suggestions for a number of reasons, including that such prior distributions are far too informative and that these authors implicitly assume, which is far from obviously the case, that the explanatory purpose of clinical trials is more important than the pragmatic one. RESPONSE: We agree and the tension of the explanatory versus pragmatic trial is a key motivation for this paper. These adjustments to the use of the p-value remain focused on the p-value and how it can inform decisions. These adjustments have been motivated by prior abuses and misinterpretations of the p-value, which is a prosaic statistic. Our approach aims to give decision makers, working under conditions of scarce resources, more meaningful statistics regarding changes to costs and health benefits. However, I agree entirely with the authors, that as soon as practical decision-making involving economics is involved, it is the value of information that is important. In this connection, I can recommend the work of Forster, Pertile and colleagues5,6. See also Burman et al.7 RESPONSE: Thanks for flagging these interesting papers. 
Thus, I think to make good their claim, the authors would, at the very least, need to simulate from a universe in which the intervention was not necessarily better than the control. Unless I have misunderstood, this was not the simulation they undertook. RESPONSE: We have added just such a simulation, which shows that when there’s no treatment benefit there is a positive cost from the intervention that is not outweighed by any quality of life benefit. The cost-effectiveness plot shows clear evidence against adopting the intervention. We have added the methods and results for this new simulation and include Figure 3. References 1. Schwartz D, Lellouch J: Explanatory and pragmatic attitudes in therapeutical trials.J Chronic Dis. 1967; 20 (8): 637-48 PubMed Abstract 2. Benjamin D, Berger J, Johannesson M, Nosek B, Wagenmakers E, Berk R, Bollen K, Brembs B, Brown L, Camerer C, Cesarini D, Chambers C, Clyde M, Cook T, De Boeck P, Dienes Z, Dreber A, Easwaran K, Efferson C, Fehr E, Fidler F, Field A, Forster M, George E, Gonzalez R, Goodman S, Green E, Green D, Greenwald A, Hadfield J, Hedges L, Held L, Hua Ho T, Hoijtink H, Hruschka D, Imai K, Imbens G, Ioannidis J, Jeon M, Jones J, Kirchler M, Laibson D, List J, Little R, Lupia A, Machery E, Maxwell S, McCarthy M, Moore D, Morgan S, Munafó M, Nakagawa S, Nyhan B, Parker T, Pericchi L, Perugini M, Rouder J, Rousseau J, Savalei V, Schönbrodt F, Sellke T, Sinclair B, Tingley D, Van Zandt T, Vazire S, Watts D, Winship C, Wolpert R, Xie Y, Young C, Zinman J, Johnson V: Redefine statistical significance. Nature Human Behaviour. 2018; 2 (1): 6-10 Publisher Full Text 3. Colquhoun D: An investigation of the false discovery rate and the misinterpretation of p-values.R Soc Open Sci. 2014; 1 (3): 140216 PubMed Abstract | Publisher Full Text 4. Senn SJ: Double Jeopardy: Judge Jeffreys upholds the law. 2015. Reference Source 5. 
Pertile P, Forster M, Torre D: Optimal Bayesian sequential sampling rules for the economic evaluation of health technologies. Journal of the Royal Statistical Society: Series A (Statistics in Society). 2014; 177 (2): 419-438 Publisher Full Text 6. Jobjörnsson S, Forster M, Pertile P, Burman CF: Late-stage pharmaceutical R&D and pricing policies under two-stage regulation. J Health Econ. 2016; 50: 298-311 PubMed Abstract | Publisher Full Text 7. Burman C-F: Decision Analysis in Drug Development. In: Dmitrienko A, Chuang-Stein C, Agostino R, eds. Pharmaceutical Statistics Using SAS: A Practical Guide.Cary: SAS Institute. 2007. 385-428"
}
]
}
] | 1
|
https://f1000research.com/articles/7-1176
|
https://f1000research.com/articles/7-1559/v1
|
26 Sep 18
|
{
"type": "Research Article",
"title": "A systematic approach to mapping longitudinal data usage: Reflections on tracking Millennium Cohort Study activity",
"authors": [
"Dylan Kneale",
"Praveetha Patalay",
"James Thomas",
"Meena Khatwa",
"Claire Stansfield",
"Emla Fitzsimons",
"Praveetha Patalay",
"James Thomas",
"Meena Khatwa",
"Claire Stansfield"
],
"abstract": "Background: The Millennium Cohort Study is the youngest of the UK’s four national birth cohort studies, but the only study (to our knowledge) where a systematic approach to exploring data usage has been undertaken. Methods: In this paper we: (i) explore previous exercises and provide justification for our approach; (ii) share headline findings of our research, (iii) outline the challenges of intersecting systematic review methods with survey design methods; and (iv) discuss the implications for future survey design as well as for future exercises tracking survey data usage. All of the results were obtained through undertaking systematic searches across 30 databases which generated over 4000 results. We then searched these records, first on title and abstract and then on the full text and extracted data on studies that fell within our specific areas of interest. Results: A total of 481 studies were identified as using MCS data in novel analyses. Among these studies, measures that have been collected across sweeps—diet, BMI, SDQ and screen time—are all comparatively well used. Data that were collected from the child’s own reports (e.g. friendships and feelings) have seldom been utilised in comparison to data collected through parental reports and using validated tools (e.g. SDQ). Imposing thresholds on data was found to be problematic in some cases, for example for BMI, where a number of different thresholds for overweight and obesity were in use. The use of different thresholds can lead to substantial differences in the results obtained. Conclusions: Longitudinal consistency in measures is key to identifying change over time, and the review helped map the degree of consistency in measures, and their utility. The findings shaped decisions around inclusion of variables in MCS7 (age 17 years), as well as the way in which existing data were deposited.",
"keywords": [
"cohort studies",
"Millennium Cohort Study",
"systematic review",
"longitudinal studies",
"child development",
"child health"
],
"content": "Introduction\n\nThe UK is home to a number of nationally and geographically representative longitudinal studies that track human development across the life course. Fielding effective instruments to respondents is essential to ensure their continuation. For longitudinal studies, ensuring that respondent burden is kept to a minimum and that respondents feel valued members of the study are particularly important considerations to encourage continued participation. Alongside respondent burden, perhaps the most significant limiting factor is the cost of fielding a question to a large sample of respondents. Metrics of fielding a successful sweep of a longitudinal survey are not formally established but are likely to include measures such as response rates, attrition and representativeness, costs, and whether the data gathered was suitable for addressing pertinent research questions. While proxy indicators are relatively well established for examining the suitability of instruments for respondents (Krosnick, 1999; Presser et al., 2004), less attention has been focussed on how to measure whether questions and data are suitable for meeting researcher and user needs. Suitable data for data users and researchers can only have been collected through questions that elicited reliable responses from respondents; however, not all reliable data will necessarily meet the needs of researchers and conversely researchers may use poorer quality or proxy measures in the absence of measures that directly support their needs. Some data may simply fall ‘under the radar’ of researchers, including those instances where the data are of little research value or policy relevance.\n\nGiven the substantial costs in fielding questions, as well as the ethical considerations of collecting but not using data from respondents, there is a need to explore how we can better understand patterns of data usage. 
In this paper we aim to report on our experience of measuring data usage from the UK’s youngest nationally representative birth cohort study, the Millennium Cohort Study (MCS), and report on some of the methodological choices and issues we encountered, present an overview of our findings, and discuss the implications of our findings and methods for similar exercises and data usage in the future. Here we report on a novel substantive focus for techniques used in systematic mapping and discuss their suitability for exploring survey data usage.\n\nReviewing the contribution of longitudinal survey data is not a new science and a body of literature is emerging that summarises the findings of longitudinal studies. These forms of enquiry can be divided into three categories. The first category includes cohort profiles that describe the development of longitudinal data sources and showcase their main objectives, strengths and findings of studies and describe the breadth of data collected; they may also review some of the main contributions to knowledge that these studies have offered (see Connelly & Platt, 2014 for an example). While not always explicitly stated, an underlying motivation of these cohort profiles is to publicise the existence and encourage the usage of the data through advising potential (and existing) users of the content, design and access to the data (Joshi & Fitzsimons, 2016; Power & Elliott, 2006). A second group of studies examines the contribution of different longitudinal studies to advancing knowledge around a given topic or research question; an example includes Joshi’s (2014) non-systematic literature review, which examines findings from the 1958- and 1970-born cohort studies around non-cognitive development among children as part of a study examining the intergenerational transmission of social advantage, which was later extended to other longitudinal sources (Joshi et al., 2016). 
As well as selectively reviewing substantive contributions of longitudinal data, one of the latent objectives of this form of review is to support users of longitudinal data in designing their own studies through highlighting research gaps and potential approaches that could be adopted to address these (for example Corden & Millar, 2007; Joshi, 2014). A third subset of studies utilises systematic review techniques to examine a tightly defined subject or question based on studies published from longitudinal data. An example using MCS data is Twamley & colleagues’ (2013) review of the evidence of how the involvement of fathers influences child and maternal mental health during early years. The prime aim of this form of study is to make a substantive contribution to the body of evidence through utilising systematic review techniques. Commonly the synthesis involves narrative synthesis of the results of studies, but evaluating the utility of different variables is often only a secondary consideration. In this paper we report on a fourth approach to reviewing longitudinal data usage, where the aim is to utilise a systematic approach to reviewing the literature, and to apply this to appraise the utility of different question areas and scales in the MCS.\n\nThe MCS is a longitudinal interdisciplinary study following the lives of just over 19,500 children born in the UK in 2000/2001. The study recruited families of children born in randomly selected electoral wards, disproportionally stratified to boost representation in England of children from disadvantaged and ethnic minority families; and with oversamples also from Scotland, Wales and Northern Ireland. Information has been collected at 9 months, 3, 5, 7, 11 and 14 years, with the next sweep of data collection at age 17 years being fielded at the time of writing. 
Over the course of the first two waves, approximately 19,000 households were recruited into the study; by age 7 the number of participating families had dropped to 13,800 and at age 14, just under 11,800 families were contacted. A wide range of data have been collected from children, parents and guardians, the partners of parents/guardians, older siblings and teachers, as well as sub-studies that collected data from health visitors; these include self-reported and objectively measured/verified data, as well as linked data from administrative records. The remainder of the paper focusses on the methods we used and our overall substantive and methodological learning from applying this approach to studying the MCS. The work presented in this paper is based on a previously published, non-peer-reviewed report available from the EPPI Centre website (http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3502) (Kneale et al., 2016).\n\n\nMethods\n\nSystematic reviewing involves conducting a review through an explicit, rigorous and accountable process of discovery, description and assessment of literature according to defined criteria, followed by a synthesis of the cumulative evidence around a given condition or intervention (Gough et al., 2012), with methods developed for statistical synthesis (meta-analysis) of the evidence across studies (Borenstein et al., 2011). As an often undefined stage of systematic reviewing, but also as an independent exercise in its own right, producing a systematic map of the literature involves summarising the topography of the evidence landscape around a given issue. Systematic mapping can be considered a more appropriate research tool in the presence of a broad research question and can be used to develop a narrower research question for a systematic review. 
Producing a systematic map of the literature follows many of the same stages as a systematic review in the formation of a research question, identification and clarification of key concepts for use in the search strategy and in defining the inclusion criteria, and some degree of data extraction. However, a systematic map may differ in the rigidity of the inclusion/exclusion criteria employed (for example, greater inclusivity in research design), in the narrative synthesis methods employed to summarise the map rather than to address tightly defined research questions, and in the absence of a formal method of quality assessment.\n\nOne of the first stages for producing a systematic map of MCS data usage was to clarify the aims. Clearly the MCS is a large study, with over 3,500 variables deposited in the age 11 standard dataset, for instance. In addition, there were already indications that a large body of evidence had accumulated. The Centre for Longitudinal Studies (CLS) has maintained a bibliography of cohort study publications that is populated through user notification and supplementary web searching. It represented a bibliography of publications, as opposed to studies, meaning that the same study could appear multiple times, for example as a conference paper, working paper and journal article. Furthermore, not all the included studies directly reported on new empirical analyses of the MCS and some were reports of MCS analyses published in other papers. Therefore, one of the aims of this work was to establish the number of unique studies that were reporting on primary analyses of MCS data. However, in order to further understand patterns of data usage and to inform the design of future sweeps of the MCS and other child cohort studies, there was a need to (i) identify where potentially under-explored areas of data may lie for MCS users, and (ii) highlight examples where detailed response categories are rarely used. 
Mapping out the totality of MCS data usage and meeting these objectives was an undertaking beyond the scope of the project and priority areas of research were identified based on specific topic areas, questions or scales. This meant that the study would be able to identify the total number of studies using MCS data through systematic methods, but that a systematic map of how all MCS data are used across different topic areas was focussed only on a core set of questions (see Table 1 for a list of measures identified).\n\nThe ten areas for in-depth mapping were selected to represent:\n\n(i) Allied topics/scales where usage could be contrasted by the type of scale used (e.g. Strengths and Difficulties Questionnaire (SDQ) and Child Social Behaviour Questionnaire (CBQ) for dimensions of child behaviour);\n\n(ii) Topics where usage could be contrasted in terms of respondent (parent/teacher reports (SDQ; CBQ) vs child report (feelings, school dis/like, friends));\n\n(iii) Allied topics where usage could be contrasted in terms of whether they are usually specified as outcomes or as antecedents (outcomes (e.g. Body Mass Index (BMI)) and antecedents (e.g. diet, screen time));\n\n(iv) Topics of high policy relevance (arguably all fall within this category, but immunisation was selected as representative here).\n\nThis meant that some important areas, notably cognitive development, were sacrificed in order to conduct a more thorough examination of these chosen constructs.\n\nOur strategy was first to systematically identify MCS studies through implementing a search across databases, and then secondly to search within these studies for those that focussed on subject areas in Table 1 using specialist systematic review software (EPPI-Reviewer 4 (see Thomas et al., 2010)). We tested a search strategy that was based on variants of MCS and was implemented across a number of databases. 
For an indication of the comprehensiveness of the search, we were able to compare our results against the CLS bibliography. Specifically, we tested whether a simple search based on ‘Millennium Cohort Study’ and variants (see Supplementary File 1) would be sufficient to capture studies or whether a more in-depth search strategy was necessary. We conducted preliminary searches based on the simpler set of search terms in Supplementary File 1 and compared these to a snapshot of 60 publications in the CLS bibliography (approximately 15% of records held for CLS publications).\n\nOf these 60 studies, 14 were identified as problematic as they did not appear in our initial set of studies. When we examined these records further we found that six would not meet our inclusion criteria as they did not use MCS data directly but instead reported on the results of MCS data published elsewhere (see details in Kneale et al., 2016). Of the remaining eight studies identified, we found that a search that included the terms in Supplementary File 1, which looked for their occurrence anywhere in the document (as opposed to title and abstract only) and implemented through Scopus and Science Direct, located seven of these studies. The remaining study was a CLS working paper and was not indexed in these sources; as a result, CLS working papers were added as a specific source. This testing was used to justify our approach of implementing a small search across a large number of databases to locate studies using MCS data, and then to screen the results for inclusion across any one of the chosen subject areas. One deviation from this was in our search for economic literature, where the search on EconLit was expanded to include terms reflecting ‘birth cohort’ (and UK geography) as well as those in Supplementary File 1; this did not yield additional results after screening. 
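The recall check described above amounts to a simple set comparison between the bibliography snapshot and the search results. As a minimal illustration (not the authors' actual tooling; all record titles below are invented), the logic could be sketched as:

```python
# Illustrative sketch of checking a search strategy's recall against a
# snapshot of a curated bibliography. Record titles are hypothetical.

def normalise(title: str) -> str:
    """Lower-case and strip punctuation so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def missed_records(bibliography: list[str], search_hits: list[str]) -> list[str]:
    """Return bibliography records not captured by the database search."""
    hits = {normalise(t) for t in search_hits}
    return [t for t in bibliography if normalise(t) not in hits]

bibliography = [
    "Child behaviour at age 5 in the Millennium Cohort Study",
    "Breakfast habits and BMI: an MCS analysis",
    "MMR uptake among MCS children",
]
search_hits = [
    "Child Behaviour at Age 5 in the Millennium Cohort Study.",
    "MMR uptake among MCS children",
]

print(missed_records(bibliography, search_hits))
# any 'missed' record would then be checked against the inclusion criteria,
# as was done for the 14 problematic records above
```

In practice, titles exported from different databases rarely match character-for-character, which is why some normalisation of case and punctuation is needed before comparing the two sets.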
Therefore, a simple search strategy conducted across a wide range of sources (29 in total (see details in Kneale et al., 2016)) was deemed to be an efficient way of identifying studies using MCS data, albeit with the caveats outlined in the conclusion.\n\nAll records were inputted into EPPI-Reviewer 4 for further screening (4,329 records). Records were first screened for duplicates, with just under half of records identified as duplicates and excluded (2,056 records). All remaining records were screened on the basis of title and abstract by two reviewers (DK and MK); any disagreements that could not be resolved were to be referred to other team members (although this did not prove to be necessary). Initial title and abstract screening mainly focussed on whether the data being used were MCS data. This involved excluding studies using data from a US-based Millennium Cohort Study (a study of military veterans) and the Gateshead Millennium Cohort Study. Studies that used MCS data, but clearly were not using the variables in Table 1 were excluded but marked separately from others (and rescreened) in order that we could accurately obtain a complete list of MCS studies. Full texts of records that were deemed to be using MCS data and were focussed on any one of the variables in Table 1 on the basis of title and abstract were retrieved and subject to a second round of full text screening by two reviewers (DK and MK). Both reviewers used a list of questions and potential synonyms for the terms used in questions to establish eligibility. We retrieved the full record for 224 publications to examine their relevance at this stage.\n\nStudies were deemed eligible for in-depth analyses if they used MCS data from one of the variables in Table 1 as a main dependent or independent variable in their analyses. ‘Main’ variable was defined on the basis of the scope of the study as outlined in the aims/objectives or research questions. 
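The deduplication step (2,056 of the 4,329 records removed) was performed inside EPPI-Reviewer, but its underlying logic can be sketched in a few lines; the records below are invented for illustration, and keying on DOI-then-title is one common heuristic rather than the software's documented behaviour:

```python
# Minimal sketch of record deduplication across database exports.

def dedupe(records: list[dict]) -> tuple[list[dict], int]:
    """Keep the first occurrence of each record, keyed on DOI when present,
    otherwise on a normalised title; return unique records and count dropped."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].lower().strip()
        if key in seen:
            continue
        seen.add(key)
        unique.append(rec)
    return unique, len(records) - len(unique)

records = [
    {"title": "An MCS working paper", "doi": None},
    {"title": "An MCS working paper", "doi": None},  # same record from a second database
    {"title": "SDQ trajectories in the MCS", "doi": "10.1/x"},
    {"title": "SDQ trajectories in the MCS (reprint)", "doi": "10.1/x"},  # same DOI
]
unique, dropped = dedupe(records)
print(len(unique), dropped)  # 2 unique records, 2 duplicates removed
```

The surviving unique records would then pass to title/abstract screening by the two reviewers.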
Where studies did not clearly specify an independent variable of interest in the aims/objectives—for example, if the study explored which of a range of factors predicted a specific outcome of interest—then we examined whether there was a focus on the question areas of interest in the literature review or conceptual framework. We aimed to exclude studies where the question area in scope was being used only as a background control variable as we were unlikely to be able to systematically identify this occurrence across all studies. This was often made apparent in studies when parameter estimates in models were not published or discussed in the write up. Studies could be included as being relevant across multiple areas of interest.\n\nInformation was extracted on: the country and institution of the lead author; study sweep(s) of data used; other data sources analysed in study; questions used in analysis; aims/objectives of study; analytical methods used in analysis; additional study design notes; whether measures were used as outcome variable or main predictor of interest; findings/results; strengths of the data/measures; difficulties reported in using data/measures and/or study limitations; recommendations for future research/data collection; journal discipline; citations of study (based on those listed on Google Scholar). Data extraction forms were piloted first before being completed for each study. Where a reviewer was unable to populate a particular field, the advice of a second reviewer was sought. The results are presented in full elsewhere (Kneale et al., 2016), and here we focus on the summary points that represent both the substantive and methodological learning we uncovered.\n\n\nResults\n\nThe total number of unique MCS studies identified was 481. 
This was a higher number of records than found on the CLS Bibliography at the time of the search (481 vs 440); however, the results represented a greater volume of studies (as opposed to publications) as we did not include duplicates, and did not include reviews, reports or news of other MCS studies that did not include primary analyses of MCS data (including, for example, the review of fatherhood studies discussed earlier (Twamley et al., 2013)). We observed that a systematic approach to discovering MCS studies results in a substantially higher volume of studies being identified than was the case through methods that rely on researcher cooperation and were supplemented through non-systematic web searches. Again, a relatively simple search strategy implemented across a comprehensive range of data sources was found to yield efficient results.\n\nA number of measures that had been collected across different sweeps—diet, BMI, SDQ and screen time—were comparatively well used and featured as a focus in 11, 49, 121 and 16 studies, respectively. Those measures that started to be collected at age 7 (and first made available in 2010: hobbies, feelings; school dis/like; friends) had a substantially lower usage and each featured in a maximum of two studies; furthermore many of the studies using these data were descriptive reports published to coincide with the depositing of MCS data in data archives. Overall, Table 2 clearly shows that data that are collected through a recognised and well-validated scale with defined thresholds or cut-off points for identifying constructs of interest and/or data that can provide a unique insight into a policy-relevant issue, were those most widely used among the topics selected for in-depth mapping.\n\n[Table 2 notes: ** different alternatives available; *** a (US) threshold for recommended maximum hours is available but is not calculable in the data; n/a, not applicable (single-informant construct); **** indicates that the measure was collected at multiple points during the first four sweeps of data collection, whereas ‘first repeated at age 11’ indicates that it was available at only a single point in the first four sweeps.]\n\nStrengths and Difficulties Questionnaire (SDQ) data (collected in three sweeps up to MCS4) had by far been the most widely used of the 10 topic areas in focus here, and featured as a main independent or dependent variable in 121 studies (out of the total 481 MCS studies). Study authors identified the strengths of SDQ measures in MCS as including their repeated collection which enabled the implementation of longitudinal modelling strategies and contributed to the understanding of developmental trajectories (for example Dillenburger et al., 2015; Hartas, 2012; Jokela, 2010; Pronzato & Arnstein, 2013) and identification of some of the moderators (and mediators) of these trajectories. For example, the repeated nature of SDQ observations was used by Midouhas & colleagues (2013) to examine how trajectories of psychopathology were moderated by family-level circumstances among children with autism. Other strengths of SDQ identified by authors included that the data were collected from different informants, parents and sometimes teachers (Hartas, 2012; Kelly et al., 2013; Zilanawala et al., 2015), which allowed for a degree of validation between reports, as well as the availability of data across different SDQ domains, which allowed one example study to explore differential impacts of contextual risk factors across the different domains (Flouri et al., 2010). Almost two-fifths of studies using SDQ (39%) relied solely on the total difficulties score, while most other studies examined one or more subscales, often alongside the total difficulties score. 
We found only one example where a single question was used as the basis for analysis, in a study focussed on children’s subjective well-being which used information on whether parents viewed their children as ‘often unhappy’, an item from the emotional symptoms subscale (Chanfreau et al., 2013). We also found other studies that used items from SDQ outside the SDQ scoring framework. For example, Delaney & Doyle (2012) used items from the hyperactivity/inattention scale in combination with two items from the Child Social Behaviour questionnaire to derive three factors (inhibition, compulsivity, impulsivity) in their examination of socioeconomic differentials of ‘time discounting’. The popularity of SDQ in the MCS follows its status as a recognised scale, collected at different time points and from different informants (parents, and at age 7 years, teachers), and with defined thresholds for identifying problem behaviour.\n\nIn contrast, an allied measure of child behaviour, the CBQ, was collected solely from parents’ reports and was developed as part of a longitudinal study examining the Effective Provision of Pre-school Education (EPPE) in the UK. For the CBQ, clear thresholds or cut-offs for identifying constructs of interest are not widely reported. Several of the studies identified as using CBQ data did not clearly report the exact questions that were being used, and there was even ambiguity as to how to refer to the CBQ scale in terms of nomenclature. In the absence of publishing full details of the questions used, many authors referred to a technical report from the EPPE project on the usage of the CBQ measures. However, this report in itself does not clearly provide technical guidance on how to construct measures and whether thresholds for underlying constructs should be imposed (as in the case for SDQ) (Sammons et al., 2003). 
Nevertheless, the CBQ was used in seven studies, and did capture some domains that would be otherwise unavailable, including, for example, self-regulation (Flouri et al., 2014).\n\nBMI data were also widely used in the literature, reflecting concerns about increasing rates of childhood obesity, which was substantiated in one paper through comparing levels in MCS with previous cohorts (Johnson et al., 2015). BMI data also shared many of the same properties as SDQ in terms of being a measure collected in a similar way across waves with defined thresholds for identifying overweight and obese children (albeit with different thresholds in use in the literature, see below), and consequently featured as a main variable of interest (either continuously or in categories) in 49 studies.\n\nUnlike BMI, data on children’s diets were utilised less frequently as the focus of a study, appearing in 11 studies. This may be due to the quality of the data, and some authors reported the need for objective measures of diet and for better measures of the frequency of consumption of different foods. This would have included collecting objective data through tools such as food diaries (Brophy et al., 2009). The lack of nuanced objective data on children’s nutritional intake was thought to undermine some of the observed associations between children’s diet and other outcomes including BMI. For example, the association between irregular breakfasting and higher BMI uncovered in Brophy & colleagues’ (2009) study may be an artefact of irregular dietary intake and compensatory snacking, although this cannot be investigated further as measures of nutritional intake are not collected. 
Similarly, others have highlighted that accurate measures around the frequency of intake of snacks are not collected (Sullivan & Joshi, 2008), as well as, more broadly, the lack of a detailed inventory of what the children eat, how frequently, and in what quantities (Connelly, 2011).\n\nData on immunisations at age 3 and 5 years did not feature in many publications. However, those using MCS data were highly cited. One of the unique strengths of the MCS data is that they were able to directly reflect and address the research needs of policy-makers in terms of understanding antecedents of MMR uptake. For example, the study of Pearce & colleagues (2009) examined children who had not been vaccinated against MMR, and uncovered that for around three-quarters of children this was through conscious choice, highlighting the level of misinformation around MMR combined vaccines that was prevalent at the time at which MCS data were collected. Crucially, MCS data were able to provide a unique insight into uptake of single vaccines as well as combined vaccines; these data were not readily available elsewhere (Anderberg et al., 2011).\n\nScreen time data were collected in a diffuse way across different sweeps, although data on the frequency of television viewing were collected consistently across all three sweeps of interest (age 3, 5 and 7 years). Screen time data featured as a focus in 16 studies and the MCS was viewed as one of the few studies that allowed for examination of patterns of screen entertainment while controlling for a broad range of sociodemographic factors (Griffiths et al., 2010). It is also one of the few studies that allows for longitudinal analysis of relationships between screen time and outcome measures (Parkes et al., 2013).\n\nA question that we wished to address in this research was to identify where granularity was lost in the data. 
That is, where detailed data are collected from respondents, but where such granularity is obscured by the need to collapse response categories to achieve a workable sample size for that category. Contrary to our expectations, we saw little evidence of granularity being ‘lost’ in this way, although this is likely a reflection of these data being underutilised. Nevertheless, two examples were identified where grouping data seemed to be somewhat problematic. The first was in terms of screen time, where data on TV viewing and computer usage were collected in bands, but where these bands did not correspond to the American Academy of Paediatrics (AAP) recommendation1 that screen time be limited to 1–2 hours per day. This meant that authors were not directly able to measure whether MCS children exceeded the AAP limits, although some did attempt to impose thresholds regardless. A further potential mismatch between the recommended thresholds is also observed to some extent in the case of fruit consumption, where data are collected on the number of fruit portions consumed, but the UK guidance around minimum consumption refers to fruit and vegetable consumption (NHS Choices, 2015). Therefore, it was not possible to measure whether MCS children were consuming the recommended number of portions of fruit or vegetables per day.\n\nThe second example where grouping data were found to be problematic was in the case of BMI, where a number of different thresholds for overweight and obesity were in use. The use of different thresholds can lead to substantial differences in the proportions of children classified as overweight/obese; for example, a Colombian study of children aged 5–18 years found differences of almost six percentage points in the prevalence of overweight/obese children when applying different thresholds of overweight/obesity (Gonzalez-Casanova et al., 2013). 
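The sensitivity of prevalence estimates to threshold choice can be illustrated with a toy calculation; the BMI values and cut-offs below are invented for illustration only, and are not the actual IOTF, UK90, CDC or WHO values (which are age- and sex-specific):

```python
# Toy illustration: the same BMI values classified against two
# hypothetical overweight cut-offs give different prevalence estimates.

def prevalence(bmis: list[float], cutoff: float) -> float:
    """Proportion of children at or above the overweight cut-off."""
    return sum(b >= cutoff for b in bmis) / len(bmis)

bmis = [14.9, 16.2, 17.1, 17.8, 18.4, 19.6, 20.3, 21.5]

loose_cutoff, strict_cutoff = 17.5, 18.5  # hypothetical thresholds
print(prevalence(bmis, loose_cutoff))   # 0.625
print(prevalence(bmis, strict_cutoff))  # 0.375
```

With only a one-unit shift in the cut-off, the proportion classified as overweight in this toy sample drops from 62.5% to 37.5%, which is why reporting the threshold used is essential for comparability across studies.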
Most MCS users classified overweight/obesity using International Obesity Taskforce (IOTF) thresholds (29/49 studies); less commonly researchers used Centers for Disease Control and Prevention (CDC) thresholds (6/49 studies), the World Health Organisation thresholds (4/49) and the UK90 thresholds (4/49). MCS data have traditionally been deposited with pre-constructed variables reflecting International Obesity Taskforce thresholds for obesity2. Meanwhile the National Obesity Observatory (at the time) recommended that in England, the British 1990 (UK90) growth reference charts should be used to determine the weight status of an individual child and population of children3, although with the caveat that other thresholds may be more appropriate dependent on the research question (National Obesity Observatory, 2011). Perhaps most concerning was that some users failed to report which definition was used (6/49 studies), impeding the comparison of results entirely.\n\n\nDiscussion\n\n1998 saw the announcement that funding would be provided for a new cohort study tracking the development of individuals born in the new millennium. Joshi & Fitzsimons (2016) outline some of the founding principles of the MCS including that the study should ‘capture as much detail on the child’s origins that may later turn out to be relevant’ to explain differentials in life course trajectories and outcomes. Meeting the needs of diverse groups of end users of a multipurpose study, including policy-makers, third sector organisations, academics, and ultimately the wider public, is not without its challenges. The properties of individual instruments may be appraised through measuring their reliability and validity, as well as establishing their responsiveness to change longitudinally, and determining the substantive focus of such instruments is usually dependent on the research question. 
Beyond measuring the scientific properties of the questions, there is no (known) standard method for evaluating the content of a survey, or more importantly for measuring the impact or success of fielding different instruments.\n\nIn the current mapping exercise, a simple search strategy that was implemented across a number of different databases significantly outperformed the existing methods of identifying studies using MCS data, identifying 481 studies in total. Through systematically mapping the usage of ten different areas of questions or instruments in the published grey and peer-reviewed literature, we also confirmed that data that are collected through a recognised and well-validated scale with defined thresholds for identifying constructs of interest and/or data that can provide a unique insight into a policy-relevant issue, are those most widely used. Unusually, data collected from children themselves (at age 7 years) were not well utilised, although this may reflect the quality of the instruments used to collect these data, as well as the domains covered by these instruments themselves. Nevertheless, collecting self-reported information on domains that are meaningful to children themselves, such as their hobbies and friendships, may be of greater substantive interest in future longitudinal studies and may also serve as a means of engaging cohort members’ future participation.\n\nThis study was one of the first to map systematically how data from a longitudinal survey are used in the literature. To fit within the resources for the exercise, the remit was restricted to ten priority areas which were selected in conjunction with the study management team. This means that while we were able to create a count of MCS studies through systematic means (481 studies), further mapping was more focussed, resulting in limitations in terms of coverage of topic areas (e.g. cognitive development and measures on parental characteristics). 
We also excluded studies that analysed data collected at age 11 from the systematic review, as the data had only been deposited a short time before conducting the review (a total of three studies were identified as using these data; none falling within our priority areas of interest). There were further limitations to our approach. Firstly, some databases only allow for title and abstract searching. Therefore we were dependent on users including mention of the study somewhere in a word-restricted abstract. We were concerned that this was unlikely to be common practice in economic literature in particular and expanded the search parameters, although this produced no additional results after screening. Relying on title and abstract is also likely to mean that we have undercounted working papers and conference papers, where the abstract is often unavailable or is not indexed. Furthermore, use of MCS data by third sector organisations as part of reports or briefings is also likely to be underrepresented. Encouraging authors to name the data source in the title/abstract would increase the likelihood of discovery in future studies, and is a recommendation that has implications beyond the MCS. A second limitation is that our conclusions around the utilisation of different topics were based on identifying these as the focus of a paper. Often this status can be hard to ascertain and is accompanied by a degree of subjectivity. While we did employ a standard definition in our screening, this may still have been open to interpretation, particularly in terms of studies testing a range of different predictors simultaneously with only a broad research question guiding variable selection. A third limitation was that our conclusions around utilising data are based on studies publishing their findings. 
Very few studies reported results that were not statistically significant for their variable of focus; Kelly and colleagues’ study provided one of the few examples where indicative although statistically insignificant associations were the focus of the paper (Kelly et al., 2013).\n\nRecommendations, which are also applicable to other longitudinal studies, can be made around how future data usage mapping exercises could be facilitated through the further development of a community of MCS users. Establishing a searchable database of MCS users could help to foster a community of users. The database could hold a short entry with users’ contact details, topic areas of interest and key variables of interest. This would allow MCS users to develop links with others with similar interests, and potentially foster collaborations between users and across institutions. This database could also be used as the basis of future work in contacting users for consultations for future sweeps and other forms of user engagement. Participation in such a database would be voluntary although it could be encouraged when users obtain the data. Similarly, enhancing the functionality of the existing library of publications could allow for the recording of more study-level data. For example, users notifying CLS of new publications could be invited to complete a template of meta-data about their publication including, for example, keywords and key variables used in the analysis. This enhanced functionality would assist in future exercises aimed at tracing MCS data usage and would also be beneficial to future researchers to identify where data have been used previously and where they are underutilised. Further guidance or emphasis of the importance of naming of the MCS in publications’ titles, abstracts or keywords when users obtain data may facilitate future reviews of data usage, and may give additional prominence to the study in the literature. 
Finally, most variables included in MCS surveys go through a process of consultation which involves a written case being made for their inclusion. Publishing a record of this case for inclusion for new variables could allow other users to understand why variables have been included. For example, in the case of hobbies data, which are not widely used, publishing this information could allow users to understand the rationale underlying new questions and may stimulate further use of the data.\n\nThe mapping exercise showed that a systematic approach to obtaining counts of overall study data usage is feasible. Detailed exploration of individual variable usage for a study as large as the MCS required limits to be placed on the scope. Nevertheless, over 150 unique studies were profiled further and the exercise confirmed the properties of variables that are highly utilised, and those whose usage remained relatively dormant. It also uncovered specific issues (not insurmountable) and incompatibilities between the way in which MCS data were collected and deposited and the wider practices and recommendations of the research community.\n\nThe systematic mapping approach exhibited strengths in being able to build a detailed depiction of published variable usage that allowed for the understanding of levels, patterns and results of usage, and facilitators and barriers to variable usage. We would welcome further exploration in terms of how a systematic approach to discovering, mapping and synthesising literature could be integrated with the further analysis of MCS and other longitudinal data. An example might be the investigation of the relationship between BMI and behavioural outcomes. A systematic review could be conducted of studies using MCS data on BMI and child behaviour to synthesise the conceptual frameworks and to help design a model to be tested in the data, with covariates selected based on the results and/or recommendations of previous studies. 
This synthesised model (based on the synthesis of theory and previous results) could then be tested on MCS data, blending both the systematic review approach and new analysis of the data.\n\nLongitudinal consistency in measures is key to identifying change over time, and the review helped map the degree of consistency in measures, and their utility. This shaped decisions around inclusion of related variables in MCS7 (age 17 years). Proportions classified as overweight and obese were calculated and deposited at UKDS for the first time using both the UK90 and IOTF thresholds for the age 14 years data, with the review results prompting this decision, and providing an impetus for researchers to consider and report the choice of threshold used. Systematic reviewing techniques are a relatively new, although flourishing, approach to the synthesis of research evidence; by contrast, longitudinal studies, such as the 1958-born cohort, have made significant contributions to the advancement of social and medical sciences for decades. Further intersection of both approaches is likely to lead to substantive and methodological innovations, and the results of the current mapping exercise show one of many potential approaches that could be taken to blending both disciplines.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.\n\n\nNotes\n\n1 No known UK equivalent exists\n\n2 At Age 14, data were deposited that included derived variables for overweight/obesity using both IOTF and UK90 thresholds.\n\n3 Also featured here (http://www.noo.org.uk/NOO_about_obesity/measurement/children (Accessed 07/03/16)",
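Note 2 above records that overweight/obesity flags were deposited under both the UK90 and IOTF thresholds, and the text notes that the choice of threshold affects interpretation. How the same child can be classified differently under two threshold sets is easy to sketch; the cut-off numbers below are invented placeholders, since the real UK90 and IOTF thresholds are age- and sex-specific centile values:

```python
def classify(bmi, thresholds):
    """Classify a BMI against (overweight, obese) cut-offs for one
    age/sex cell; returns 'normal', 'overweight', or 'obese'."""
    overweight, obese = thresholds
    if bmi >= obese:
        return "obese"
    if bmi >= overweight:
        return "overweight"
    return "normal"

# Placeholder cut-offs for a single hypothetical age/sex cell; the published
# UK90 and IOTF thresholds are centile curves, not single numbers.
UK90_CUTS = (19.0, 21.0)
IOTF_CUTS = (19.8, 23.0)

bmi = 21.5
print(classify(bmi, UK90_CUTS), classify(bmi, IOTF_CUTS))  # → obese overweight
```

The divergent labels for the same BMI illustrate why the review recommends that researchers report which threshold set was used.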
"appendix": "Grant information\n\nWe acknowledge funding from the Economic and Social Research Council in completing this study (grant numbers ES/K005987/1 and ES/M001660/1).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe would like to acknowledge the valuable comments made by Professor Heather Joshi on an earlier version of this paper.\n\n\nSupplementary material\n\nSupplementary File 1. Search syntax used in the current study.\n\nClick here to access the data.\n\n\nReferences\n\nAnderberg D, Chevalier A, Wadsworth J: Anatomy of a health scare: education, income and the MMR controversy in the UK. J Health Econ. 2011; 30(3): 515–530. PubMed Abstract | Publisher Full Text\n\nBonell C, Fletcher A, McCambridge J: Improving school ethos may reduce substance misuse and teenage pregnancy. BMJ. 2007; 334(7594): 614–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBorenstein M, Hedges LV, Higgins JPT, et al.: Introduction to meta-analysis. New York: John Wiley & Sons. 2011. Reference Source\n\nBrophy S, Cooksey R, Gravenor MB, et al.: Risk factors for childhood obesity at age 5: analysis of the millennium cohort study. BMC Public Health. 2009; 9: 467. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChanfreau J, Lloyd C, Byron C, et al.: Predicting wellbeing. 2013. Reference Source\n\nConnelly R: Drivers of unhealthy weight in childhood: analysis of the Millennium Cohort Study. (Scottish Government social research: public services and government) (9781780453095). Retrieved from Edinburgh, 2011. Reference Source\n\nConnelly R, Platt L: Cohort profile: UK Millennium Cohort Study (MCS). Int J Epidemiol. 2014; 43(6): 1719–1725. PubMed Abstract | Publisher Full Text\n\nCorden A, Millar J: Time and change: A review of the qualitative longitudinal research literature for social policy. Soc Policy Soc. 2007; 6(4): 583–592. 
Publisher Full Text\n\nDelaney L, Doyle O: Socioeconomic differences in early childhood time preferences. J Econ Psychol. 2012; 33(1): 237–247. Publisher Full Text\n\nDillenburger K, Jordan JA, McKerr L, et al.: The Millennium child with autism: early childhood trajectories for health, education and economic wellbeing. Dev Neurorehabil. 2015; 18(1): 37–46. PubMed Abstract | Publisher Full Text\n\nFlouri E, Midouhas E, Joshi H: Family poverty and trajectories of children's emotional and behavioural problems: the moderating roles of self-regulation and verbal cognitive ability. J Abnorm Child Psychol. 2014; 42(6): 1043–1056. PubMed Abstract | Publisher Full Text\n\nFlouri E, Tzavidis N, Kallis C: Area and family effects on the psychopathology of the Millennium Cohort Study children and their older siblings. J Child Psychol Psychiatry. 2010; 51(2): 152–161. PubMed Abstract | Publisher Full Text\n\nGonzalez-Casanova I, Sarmiento OL, Gazmararian JA, et al.: Comparing three body mass index classification systems to assess overweight and obesity in children and adolescents. Rev Panam Salud Publica. 2013; 33(5): 349–355. PubMed Abstract\n\nGoodman R: The Strengths and Difficulties Questionnaire: a research note. J Child Psychol Psychiatry. 1997; 38(5): 581–586. PubMed Abstract | Publisher Full Text\n\nGough D, Oliver S, Thomas J: Introducing systematic reviews. In D. Gough, S. Oliver, & J. Thomas (Eds.), An Introduction to Systematic Reviews. London: Sage. 2012. Reference Source\n\nGriffiths LJ, Dowda M, Dezateux C, et al.: Associations between sport and screen-entertainment with mental health problems in 5-year-old children. Int J Behav Nutr Phys Act. 2010; 7: 30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHartas D: Children's language and behavioural, social and emotional difficulties and prosocial behaviour during the toddler years and at school entry. British Journal of Special Education. 2011; 38(2): 83–91. 
Publisher Full Text\n\nHartas D: Children's social behaviour, language and literacy in early years. Oxf Rev Educ. 2012; 38(3): 357–376. Publisher Full Text\n\nHogan AE, Scott KG, Bauer C: The Adaptive Social Behaviour Inventory (ASBI): A new assessment of social competence in high risk three-year-olds. J Psychoeduc Assess. 1992; 10: 230–239. Publisher Full Text\n\nJohnson J: Millennium Cohort Study: Psychological, Developmental and Health Inventories. Review of. Centre for Longitudinal Studies. 2012. Reference Source\n\nJohnson W, Li L, Kuh D, et al.: How Has the Age-Related Process of Overweight or Obesity Development Changed over Time? Co-ordinated Analyses of Individual Participant Data from Five United Kingdom Birth Cohorts. PLoS Med. 2015; 12(5): e1001828; discussion e1001828. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJokela M: Characteristics of the first child predict the parents' probability of having another child. Dev Psychol. 2010; 46(4): 915–926. PubMed Abstract | Publisher Full Text\n\nJoshi H: ‘Non-cognitive’skills: What are they and how can they be measured in the British cohort studies. Retrieved from London. 2014. Reference Source\n\nJoshi H, Fitzsimons E: The Millennium Cohort Study: the making of a multi-purpose resource for social science and policy. Longit Life Course Stud. 2016; 7(4): 409–430. Publisher Full Text\n\nJoshi H, Nasim B, Goodman A: The Measurement of Social and Emotional Skills and their Association with Academic Attainment in British Cohort Studies. Non-cognitive Skills and Factors in Educational Attainment. Springer. 2016; 239–264. Publisher Full Text\n\nKelly Y, Becares L, Nazroo J: Associations between maternal experiences of racism and early child health and development: findings from the UK Millennium Cohort Study. J Epidemiol Community Health. 2013; 67(1): 35–41. 
PubMed Abstract | Publisher Full Text\n\nKelly Y, Iacovou M, Quigley MA, et al.: Light drinking versus abstinence in pregnancy - behavioural and cognitive outcomes in 7-year-old children: a longitudinal cohort study. BJOG. 2013; 120(11): 1340–1347. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKneale D, Patalay P, Khatwa M, et al.: Piloting and producing a map of Millennium Cohort Study Data usage: Where are data underutilised and where is granularity lost? Retrieved from London. 2016. Reference Source\n\nKrosnick JA: Survey research. Annu Rev Psychol. 1999; 50(1): 537–567. PubMed Abstract | Publisher Full Text\n\nMidouhas E, Yogaratnam A, Flouri E, et al.: Psychopathology trajectories of children with autism spectrum disorder: the role of family poverty and parenting. J Am Acad Child Adolesc Psychiatry. 2013; 52(10): 1057–1065.e1. PubMed Abstract | Publisher Full Text\n\nNational Obesity Observatory: A simple guide to classifying body mass index in children. Retrieved from London. 2011. Reference Source\n\nNHS Choices: Why 5 A DAY? 2015. Reference Source\n\nParkes A, Sweeting H, Wight D, et al.: Do television and electronic games predict children's psychosocial adjustment? Longitudinal research using the UK Millennium Cohort Study. Arch Dis Child. 2013; 98(5): 341–348. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPearce A, Elliman D, Law C, et al.: Does primary immunisation status predict MMR uptake? Arch Dis Child. 2009; 94(1): 49–51. PubMed Abstract | Publisher Full Text\n\nPower C, Elliott J: Cohort profile: 1958 British birth cohort (National Child Development Study). Int J Epidemiol. 2006; 35(1): 34–41. PubMed Abstract | Publisher Full Text\n\nPresser S, Couper MP, Lessler JT, et al.: Methods for testing and evaluating survey questions. Public Opin Q. 2004; 68(1): 109–130. Publisher Full Text\n\nPronzato C, Arnstein A: Marital breakup and children's behavioural responses. IDEAS Working Paper Series from RePEc. 
Turin: University of Turin. 2013. Reference Source\n\nSammons P, Sylva K, Melhuish E, et al.: Measuring the impact of pre-school on children's social/behavioural development over the pre-school period. Institute of Education. 2003. Reference Source\n\nSchoon I, Bynner J: Risk and resilience in the life course: implications for interventions and social policies. J Youth Stud. 2003; 6(1): 21–31. Publisher Full Text\n\nSullivan A, Joshi H: Child Health. In K Hansen & H Joshi (Eds.), Millennium Cohort Study Third Survey: a user’s guide to initial findings. London: Centre for Longitudinal Studies, Institute of Education, University of London. 2008.\n\nThomas J, Brunton J, Graziosi S: EPPI-Reviewer 4.0: software for research synthesis. Retrieved from London. 2010. Reference Source\n\nTwamley K, Brunton G, Sutcliffe K, et al.: Fathers' involvement and the impact on family mental health: evidence from Millennium Cohort Study analyses. Community, Work & Family. 2013; 16(2): 212–224. Publisher Full Text\n\nWarden D, Cheyne B, Christie D, et al.: Assessing Children’s Perceptions of Prosocial and Antisocial Peer Behaviour. Educational Psychology. 2003; 23(5): 547–567. Publisher Full Text\n\nZilanawala A, Sacker A, Nazroo J, et al.: Ethnic differences in children's socioemotional difficulties: Findings from the Millennium Cohort Study. Soc Sci Med. 2015; 134: 95–106. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "38786",
"date": "15 Oct 2018",
"name": "Summer Sherburne Hawkins",
"expertise": [
"Social epidemiologist who used sweeps 1 and 2 of the MCS in my dissertation"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper systematically examined all published literature (481 papers) that analyzed the Millennium Cohort Study (MCS) across 10 pre-selected domains. The MCS is the youngest birth cohort in the UK and will continue across the life course as others have done successfully, such as the 1958 cohort. The authors did a really nice job highlighting important ethical considerations, including the collection of data but researchers not using it. While the authors list four aims of the paper, it feels more that the paper is about identifying how the MCS papers intersect with these 10 areas. The paper is structured more like a research article, with methods and results, rather than specifically exploring the other three areas. For example, the authors mention that aim four is to “discuss the implications for future survey design”, which should be part of the discussion rather than a separate aim. There are a few other weaknesses of the paper that should be addressed.\n\nA challenge is understanding how the 10 priority areas were chosen. While it’s understandable that all topics cannot be explored due to the sheer volume of variables at each sweep, it’s not completely clear how these 10 areas were selected. By consensus of the authors? Through discussion with experts? Policy priorities? For example, another topic could be about inequalities with a focus on poverty as the MCS over-sampled disadvantaged children. 
Page 10 states “ten priority areas which were selected in conjunction with the study management team”. This information should have been included in the methods rather than the discussion. Furthermore, a study limitation was that other important areas were not included, which would have helped to identify gaps in the literature.\n\nIn addition, page 6 describes the information extracted from the studies and it would have been useful to have a table of summary statistics with these data mapped across the 10 areas. For example, the SDQ, which was used across the largest number of studies, could have been completed by multiple informants and it would have been interesting to know how many studies took advantage of this aspect. On a related note, data have been collected from partners (primarily fathers), older siblings, and teachers. Due to the ethical issues raised previously about making sure that data are used, it’s important to understand how and whether researchers are using these multiple sources of information.\n\nAnother limitation is the lack of quantitative data presented in the paper, particularly when describing the results of the review. The discussion of the specific measures in the results is too general and does not provide enough of the specifics that were extracted from each study. I would have preferred that specific information about each measure, such as issues related to cut-off of BMI, be included in the body of the results rather than the section titled, “Specific issues around granularity and data usage”. The authors have raised an important point about differences in cut-offs and what that means for interpretation and implications of results. 
The authors mention that IOTF-generated thresholds were constructed and included in the dataset, which sounds like a good recommendation to encourage consistency across studies.\n\nThe paper would benefit from summary statistics, such as:\nThe authors make the statement: “Contrary to our expectations, we saw little evidence of granularity being ‘lost’ in this way, although this is likely a reflection of these data being underutilised.” It is challenging to understand how this was determined – through a quantitative analysis or were there specific criteria? Similarly, it would be helpful if the authors could quantify utilization: “Unusually, data collected from children themselves (at age 7 years) were not well utilised,…”\nAdditional information that would have been useful to quantify:\nHow many studies included children who were singletons versus multiple births. A challenge of many analyses is using twins or triplets and they were often excluded. It raises the question as to whether they should be surveyed at all? How many studies included data on fathers, older siblings, and teachers? How many studies included information on the neighborhood context? A study on obesity could have looked at neighborhood quality and obesity. How many looked at outcomes outside the ‘health’ arena in these 10 areas? Such as social, educational, or behavioral exposures or outcomes?\n\nHere are comments specific to each section: Abstract\nIs “novel” analyses necessary? Is this signifying unique analyses, meaning without duplication?\nMethods\nPage 3, First sentence: “through an explicit” Is this supposed to be “thorough and explicit”? Page 5, “albeit with the caveats outlined in the conclusion”. This phrase is not clear – please summarize the caveats mentioned in the conclusion. Page 5, what % of the reviewers matched at different points in the selection process? 
Since papers were only screened if they had variables related to the topics in Table 1, it seems even more important to provide justification on how these topics were chosen. Another approach could have been to review all studies relating to the MCS (additional 258 in Figure 1) and organize the studies around themes to identify what topics have been examined versus gaps.\nResults\nPage 6, “The total number of unique MCS studies identified was 481.” However, why is 481 not identifiable in Figure 1?\n\nDiscussion\nIn Figure 1, it is not clear how many articles were discarded if the variable of interest was not the focus of the paper. It seems that this could be quantified. Another area to discuss is the ability of the MCS to be used in cross-cohort comparisons – how do the variables compare across UK cohorts or across cohorts from other countries? Data will be utilized by more researchers if they can be compared across multiple domains.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "41846",
"date": "28 Jan 2019",
"name": "Polina Obolenskaya",
"expertise": [
"Quantitative social scientist with expertise across the areas of sociology and social policy",
"experienced user of the four British Cohort Studies",
"including the Millennium Cohort Study."
],
"suggestion": "Approved",
"report": "Approved\n\nThis is a very interesting and useful article, which makes a contribution to the knowledge about the use of the Millennium Cohort Study data in general and a utilisation of the sub-set of child-level measures overtime specifically; and to the methodology of applying systematic mapping techniques to reviewing longitudinal data use. The aims of this article are: to firstly identify all the unique studies that use MCS for primary data analysis; and, secondly, to explore the way the data was used (under-explored areas/rarely used response categories), focusing on a subset of areas of interest. I feel that the authors have fully accomplished their aims and I am certain that this article will prove to be of benefit to a wide range of readers. On the whole I found it a pleasant and easy read, which provides the right amount of detail, and is well structured and coherent. I would therefore recommend it for indexing after considering some of my minor suggestions listed below:\n\nDetailed comments:\n\nThe abstract:\n\nI would suggest the authors to be a little bit more specific in writing so that it is more obvious what their study did from reading the abstract alone.\nI am not sure I understand the “background” section - do the authors mean to say that their paper is the only one that systematically explored MCS data? 
If so, perhaps re-phrase the sentence to make it clearer.\n\nIn the “methods” section of the abstract: when the authors refer to “(i) explore previous exercises”, I think the authors should be a little more explicit and say something like “(i) explore previous exercises which examined the use of MCS data”. Otherwise it sounds like there were other systematic mapping exercises.\n\nAgain, in the “methods” section of the abstract: when referring to own approach, expand a little to say something along the lines of: “our approach, namely systematic mapping” to be precise.\n\nMethods & results sections:\n\nThe method of the systematic mapping is clearly described within the methods section, helpfully contrasted with the systematic review.\n\nThe term “systematic mapping”, which is the review method used in this work, does not appear in the paper until quite far along in the methods section (with the exception of a brief mentioning of it at the end of the introduction). I think it should come up earlier: for example, the end of the first paragraph of the section “Previous approaches in reviewing longitudinal data usage”, could, in my opinion, benefit from explicitly referring to the manuscript’s approach as “systematic mapping”.\n\nAlthough it is understandable that to review all the areas of MCS data collection would not be within the scope of this work, it would be useful to get a sense of the reasons for choosing these particular topics, even if briefly. And perhaps, later in the paper to say something, about the focus of studies not included in the in-depth analysis (even if it is just to name a few big obvious areas that the MCS data is used for in those publications).\n\nI thought that the authors did a good job in clearly describing their search methodology, specifying search terms, providing the description of the way the initial screening and subsequent screenings took place and the information extracted with a supplementary flow diagram. 
My one suggestion would be for the authors to make the total number of studies identified, which is mentioned in the text of the article (n=481), more visible in the flow chart (Figure 1) as it currently takes some figuring out and adding two numbers together to get to it.\n\nIn terms of reproducing the results and for the full transparency of the review, I think a date (dates) when the searches were undertaken and the CLS list of studies was obtained should be included within the methodology description. It would also be good to see a full list of studies (all 481 of them), as well as the identified selected studies for the in-depth analysis, in a separate appendix.\n\nI think specifying exactly which sweeps of data the measures are available in would be useful for Table 2.\n\nI would also suggest to move the introduction of new information – the fact that studies using age 11 sweep were excluded from the review - from the discussion section of the article to the methodology.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1559
|
https://f1000research.com/articles/7-1557/v1
|
26 Sep 18
|
{
"type": "Research Article",
"title": "Neuraxial anesthesia for postpartum tubal ligation at an academic medical center",
"authors": [
"Carlos Delgado",
"Wil Van Cleve",
"Christopher Kent",
"Emily Dinges",
"Laurent A. Bollag",
"Wil Van Cleve",
"Christopher Kent",
"Emily Dinges",
"Laurent A. Bollag"
],
"abstract": "Background: Use of an in situ epidural catheter has been suggested to be efficient to provide anesthesia for postpartum tubal ligation (PPTL). Reported epidural reactivation success rates vary from 74% to 92%. Predictors for reactivation failure include poor patient satisfaction with labor analgesia, increased delivery-to-reactivation time and the need for top-ups during labor. Some have suggested that this high failure rate precludes leaving the catheter in situ after delivery for subsequent reactivation attempts. In this study, we sought to evaluate the success rate of neuraxial techniques for PPTL and to determine if predictors of failure can be identified. Methods: After obtaining IRB approval, a retrospective chart review of patients undergoing PPTL after vaginal delivery from July 2010 to July 2016 was conducted using CPT codes, yielding 93 records for analysis. Demographic, obstetric and anesthetic data (labor analgesia administration, length of epidural catheter in epidural space, top-up requirements, time of catheter reactivation, final anesthetic technique and corresponding doses for spinal and epidural anesthesia) were obtained. Results: A total of 70 patients received labor neuraxial analgesia. Reactivation was attempted in 33 with a success rate of 66.7%. Patient height, epidural volume of local anesthetic and administered fentanyl dose were lower in the group that failed reactivation. Overall, spinal anesthesia was performed in 60 patients, with a success rate of 80%. Conclusions: Our observed rate of successful postpartum epidural reactivation for tubal ligation was lower than the range reported in the literature. Our success rates for both spinal anesthesia and epidural reactivation for PPTL were lower than the generally accepted rates of successful epidural and spinal anesthesia for cesarean delivery. 
This gap may reflect a lower level of motivation on behalf of both the patients and anesthesia providers to tolerate “imperfect” neuraxial anesthesia once fetal considerations are removed.",
"keywords": [
"obstetrical anesthesia",
"spinal anesthesia",
"epidural anesthesia",
"general anesthesia",
"tubal sterilization",
"postpartum period"
],
"content": "Introduction\n\nTubal ligation in the immediate postpartum period, postpartum tubal ligation (PPTL), is typically performed on the labor ward, but less than 50% of women who desire PPTL receive the procedure in the immediate postpartum period, despite the American College of Obstetricians and Gynecologists (ACOG) defining PPTL as an urgent procedure due to the limited optimal surgical timeframe1. This gap between patient preference and observed outcome emphasizes the importance of an evidence-based anesthetic approach for PPTL to reduce barriers to receiving anesthetic care in the early postpartum period. Use of an existing labor epidural catheter has been proposed as an efficient way to provide anesthesia for PPTL2. Older studies reported epidural reactivation success rates varying from 74% to 92%2–4. Recently, Powell et al. reported an epidural reactivation success rate of 78% in a prospective observational study of anesthesia for PPTL and outlined the risk factors for failed reactivation of an epidural catheter. Predictors of failure included poor patient satisfaction with labor analgesia, increased delivery-to-reactivation time, and the need for manual top-ups during labor and delivery5. Within our practice, and sometimes in the broader obstetric anesthesia community, providers have suggested that rates of failure when attempting catheter “reactivation” for PPTL do not support the practice of leaving a labor epidural catheter in place for an interval PPTL. Instead, these providers advocate the routine removal of the epidural catheter followed by a de novo spinal anesthetic (SA). 
This study evaluated the frequency of success at our center using the aforementioned anesthetic techniques for PPTL and sought to determine if there are clinical success predictors that can aid in anesthetic decision-making.\n\n\nMethods\n\nThis retrospective observational study was approved by the University of Washington Review Board, which waived the requirement of informed consent (HSD Study STUDY0000117). A chart review of all medical records in the labor and delivery unit from July 2010 to July 2016 was conducted to identify patients with CPT codes for bilateral tubal ligation that occurred consecutively after vaginal delivery. No exclusion criteria were applied.\n\nData collected for each case included demographic data (age, body mass index), obstetric data (gravidity, parity, gestational age achieved) and anesthetic data (type of labor analgesia: combined-spinal epidural or straight lumbar epidural; number of regional anesthesia attempts performed; length of epidural catheter in epidural space, top-up requirements during labor, time of catheter reactivation after delivery, if applicable; and the initial and final anesthetic techniques used to complete the case: successful epidural reactivation, de novo spinal anesthetic or general anesthesia). Perioperative doses for medications for spinal and epidural anesthetics as well as the use of supplemental sedative/hypnotic agents were also collected. Successful epidural reactivation was defined as completion of the surgical procedure under epidural analgesia.\n\nStatistical analysis of collected data was performed using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria). Univariate distributions are described as proportions, means and standard deviations, or medians and interquartile ranges, as appropriate. Continuous variables were compared using t-tests and ordinal variables were compared using Fisher’s exact test. 
Statistical significance was pre-specified as p < 0.05.\n\n\nResults\n\nData from 93 patients were analyzed. Neuraxial analgesia for labor was used in 70 patients (75%). Of these patients that received labor analgesia, 33 (47%) underwent attempts at reactivation, with a success rate of 66.7% (22 patients). For this group of patients, the mean documented length of catheter in space was 4.9 (± 0.3) cm. A total of four patients (18%) required top-ups during labor. Median time to reactivation after delivery was 4.8 (IQR 3.3–9.8) hours. The mean volume of local anesthetic used to initiate anesthesia for the surgical procedure was 21.7 (± 7.6) ml. The mean epidural fentanyl dose was 86.7 (± 22.9) µg. Intravenous midazolam (1.9 ± 0.5 mg) and fentanyl (68.2 ± 35.5 µg) were also given in 16 patients. When comparing the characteristics of successful and unsuccessful epidural reactivations, we observed that patient height (163 ± 7.2 cm versus 158 ± 6.0 cm, p = 0.03), volume of local anesthetic administered during reactivation (21.7 ± 7.6 ml versus 14.8 ± 13.2 ml, p = 0.03), and dose of epidural fentanyl (86.7 ± 22.9 µg versus 63 ± 24.4 mcg, p = 0.03) were lower in the group that failed catheter reactivation. Total intravenous fentanyl was also higher (127.1 ± 57.6 µg) in this group compared to the successful group (68.2 ± 35.5 µg) (p = 0.007) (Table 1).\n\nAll data presented as mean ± standard deviation; median, interquartile range; percentage. T-test and Fisher’s exact test, p < 0.05 for statistical significance. BMI, body mass index; IQR, interquartile range; IV, intravenous.\n\nIn patients in which reactivation was unsuccessful, a rescue SA was attempted in five cases with a success rate of 80%. The patient that failed SA (bupivacaine 11.2 mg, no opioid) had received large amounts of epidural solution at reactivation (30 ml chloroprocaine 3% and 10 ml lidocaine 2%). 
General anesthesia was the final anesthetic technique for the remaining unsuccessful epidural reactivations and the failed SA.\n\nEpidural reactivation was not attempted in 37 patients (53%). There were two cases that received general anesthesia as the primary technique. One patient received combined spinal epidural (CSE) anesthesia as the primary technique due to maternal congenital cardiac disease. SA was performed after removal of epidural catheter in 34 patients. This technique was successful in 25 patients (74%). For the cases in which SA block was not achieved, general anesthesia (8 patients) or CSE anesthesia (1 patient) were used.\n\nSingle-shot spinal anesthesia in patients with pre-existing epidural catheters (combining those in whom epidural reactivation was and was not attempted) was performed in 39 patients, with an overall success rate of 74%. An attempt to reactivate the catheter prior to spinal placement had been carried out in 4 patients (13.8%), with a median elapsed time of 3.3 (IQR 0.7–9.6) hours after delivery and an average volume of 12.7 ± 8.6 ml of local anesthetic. Due to insufficient levels of anesthetic blockade after attempt at reactivation, SA was chosen as the rescue anesthetic technique. The mean volume of local anesthetic (hyperbaric bupivacaine 0.75%) was 1.5 ± 0.3 ml. The mean intrathecal fentanyl dose was 14.2 ± 6.4 µg.\n\nData regarding demographic, obstetric and anesthetic variables comparing successful versus unsuccessful SA in patients with a pre-existing epidural catheter is presented in Table 2. Apart from a more advanced gestational age in the failed SA group, no statistical differences existed between successful and unsuccessful SAs.\n\nAll data presented as mean ± standard deviation; median, interquartile range; percentage. T-test and Fisher’s exact test, p < 0.05 for statistical significance. *Only one patient in this group underwent reactivation. No statistical calculations were performed. 
BMI, body mass index; IQR, interquartile range; IV, intravenous; NA, not applicable.\n\nOf the 23 patients who did not have a pre-existing epidural catheter at the time of PPTL, 21 (91%) received a single-shot spinal block, with a success rate of 91%. The remaining two cases were performed under general anesthesia and CSE as the initial techniques. The mean volume of local anesthetic (hyperbaric bupivacaine 0.75%) was 1.5 ± 0.2 ml. The mean intrathecal fentanyl dose was 15.2 ± 6.8 µg. Intravenous midazolam (mean 2.1 ± 0.9 mg) and fentanyl (mean 87.5 ± 59.4 µg) were also given as adjuvants. No statistical analysis was performed to compare success at performing SA in patients without pre-existing epidurals, given the high success rate of this technique.\n\nA review of all cases of attempts at SA (patients with a prior epidural catheter, irrespective of attempts at reactivation, and patients without a pre-existing epidural catheter) revealed a spinal block success rate of 80% (48 of 60 cases). Mean intrathecal doses were 1.5 ± 0.2 ml of hyperbaric bupivacaine 0.75% and 14.2 ± 6.2 µg of fentanyl. Intrathecal fentanyl doses above 20 µg added to bupivacaine were associated with spinal failure (p = 0.001). No other demographic, obstetric or anesthetic factors were statistically different (Table 3).\n\nAll data presented as mean ± standard deviation; median; percentage. T-test and Fisher’s exact test, p < 0.05 for statistical significance. BMI, body mass index.\n\nThe final distribution of anesthetic techniques used for PPTL, and their success, is presented in Figure 1.\n\n\nDiscussion\n\nIn a review of 6 years of data from our practice, we observed a success rate of 67% when attempting to use in situ epidural catheters for PPTL, lower than we expected given the published literature on this topic. A recent retrospective review (n = 202) of PPTL anesthesia reported an epidural reactivation success rate of 74%6.
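The pooled spinal-anesthesia success rates quoted in the Results follow directly from the subgroup counts given there; a quick arithmetic cross-check (the 19 successes in the no-epidural group are inferred from the reported ~91% of 21 attempts, so that count is an assumption):

```python
# Cross-check of pooled spinal anesthesia (SA) success rates from the
# subgroup counts reported in the Results: (attempts, successes) per group.
rescue_sa = (5, 4)           # rescue SA after failed reactivation (80% of 5)
sa_after_removal = (34, 25)  # SA after epidural catheter removal
sa_no_epidural = (21, 19)    # no pre-existing epidural (~91% of 21, inferred)

attempts = rescue_sa[0] + sa_after_removal[0] + sa_no_epidural[0]
successes = rescue_sa[1] + sa_after_removal[1] + sa_no_epidural[1]
print(f"all SA attempts: {successes}/{attempts} = {successes / attempts:.0%}")

prior_attempts = rescue_sa[0] + sa_after_removal[0]
prior_successes = rescue_sa[1] + sa_after_removal[1]
print(f"SA after prior epidural: {prior_successes}/{prior_attempts} = "
      f"{prior_successes / prior_attempts:.0%}")
```

The totals reproduce the 80% (48 of 60) overall rate and the 74% (29 of 39) rate in patients with a pre-existing catheter.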
A prospective study (n = 100) designed to assess the risk factors for failed epidural reactivation reported a success rate of 78%5. In analyzing our data, we found no association between previously noted risk factors and epidural failure. Vincent et al. reported that a shorter time interval between delivery and reactivation attempt was a predictor for success, with reinjection within 4 hours of delivery having the highest success rate3. These findings are in agreement with data from the recent prospective study by Powell et al.5. Others have suggested that the time interval between catheter insertion and reactivation is of importance, with a period less than 24 hours as a more reliable predictor of success compared to time between delivery and reactivation6. Our findings do not reveal any relationship between time of catheter placement or delivery and success of reactivation. The majority of catheters were reactivated within 5 hours of delivery, while only one was used more than 12 hours after delivery.\n\nThe immediate postpartum period is the ideal time to perform PPTL due to the ease and convenience for both physicians and patients1. Indeed, ACOG defines PPTL as an urgent procedure, because failure to accomplish it during the delivery hospitalization may make the procedure more complex and increase the risk of unintended pregnancy in the first year following birth7,8. Limited availability of nursing, anesthesia, and obstetric staff for this procedure, alongside busy workloads on the labor ward, may contribute to failures in achieving a pre-discharge PPTL.\n\nThe probability of pain during surgery under epidural anesthesia has been linked with patient height, but only in very short and tall patients, and it significantly interacts with weight9. Although we observed that the mean height in the group that failed reactivation was lower (by close to 5 cm), we struggle to find a biologically plausible explanation for this finding.
Administration of higher volumes of epidural solution increases dermatomal spread, particularly with bolus administration10. The group in which reactivation was unsuccessful received a lower volume of both local anesthetic and opioid. Epidural medication is typically incrementally titrated during catheter reactivation. If, early during the reactivation attempt, an inadequate or patchy block is noted, epidural reactivation is typically aborted to avoid a potentially high rescue SA. This could explain the lower volume of local anesthetic used in that group. Although intravenous fentanyl use was higher in the failed epidural reactivation group, this reflects the total dose, which includes cases converted to general anesthesia.\n\nFor those without preexisting epidural analgesia, the successful use of spinal anesthesia with hyperbaric bupivacaine with or without opioids has been reported by multiple authors11,12. This technique was favored by most providers for patients who had labored without neuraxial analgesia, with a high rate of success. Administration of a dose of local anesthetic similar to that used for cesarean deliveries (e.g. 12 mg of bupivacaine) seems to provide sufficient anesthesia for PPTL12,13.\n\nWhen spinal anesthesia was attempted in patients with a history of epidural analgesia during labor, however, our success rate was only 80%. This 20% failure rate is higher than the reported 2–6% failure range described for spinal anesthesia for cesarean delivery14,15 and clearly higher than the 1% conversion rate from regional to general anesthesia due to failed spinal anesthesia recommended by the Royal College of Anaesthetists16. In a large retrospective audit of over 5000 cesarean deliveries, spinal anesthesia failure was three times higher in women in whom an epidural was used but not topped up for emergent cesarean delivery15. Clear, free-flowing cerebrospinal fluid (CSF) is associated with a successful spinal block.
Prior injection of local anesthetic into the epidural space could lead providers to mistake epidural local anesthetic return through a spinal needle for CSF, which might provide an explanation for failure to achieve SA after epidural analgesia had been performed17.\n\nSome evidence points to a need for increased doses of local anesthetic in SA for PPTL to adjust for changes in segmental blockade requirements in the postpartum period18. Huffnagle et al. found that while 7.5 mg of hyperbaric bupivacaine provided adequate surgical anesthesia for this procedure, some failed spinals occurred at this dose19. The mean local anesthetic doses used in our institution were similar to our standard for cesarean deliveries (1.4 ml of hyperbaric bupivacaine 0.75%), and we did not find a difference between successful and failed SAs in our patients. We also found that advanced gestational age was linked to failure of SA, regardless of whether there had been any epidural space manipulation. Our finding contrasts with reports of inadequate surgical anesthesia for cesarean deliveries in pre-term parturients, even though it was determined that low fetal weight was the main factor implicated20. Notably, fentanyl doses above 20 µg were observed overall in the spinal failure group, without being associated with a decrease in the corresponding local anesthetic dosage.\n\nConversion to general anesthesia in the obstetric patient after neuraxial anesthesia placement is often the result of decreased patient tolerance to pain during the procedure, in addition to concerns by the surgical team, as reported in a large retrospective review of over 35,000 spinal anesthetics for cesarean delivery by Guglielmo et al. In this study, SA was impossible to perform in a few rare cases. More commonly, the block was achieved but was insufficient to provide an adequate surgical block21.
General anesthesia was used in 55% of cases of PPTL in small community hospitals according to the Obstetric Anesthesia Workforce Survey; this contrasts with the use of general anesthesia in less than 25% of the cases of PPTL in large referral hospitals affiliated with university programs22. Ultimately, the decision to use a particular type of anesthetic for PPTL should be individualized based on obstetric and anesthetic factors, as well as patient preference. Regional anesthesia, however, seems to be the favored approach, as the time for maternal physiology to return to baseline in the postpartum period is not well delineated23.\n\nThere are several limitations to our study. As a single-institution retrospective study with small numbers of patients in the subgroups of interest and relatively few procedures, our work can only add to the existing studies on this topic, with limited generalizability. Further, even though each record was personally reviewed by one of the authors (all of whom are members of the obstetric anesthesia division) to minimize the amount of missed data and erroneous coding regarding type of anesthesia, our data collection is potentially subject to bias due to anesthetic technique preferences on the part of the research team. Documentation of the reasons for favoring attempts at reactivation over proceeding with spinal anesthesia directly was not consistently found in the clinical records. In most of the cases in which reactivation was not attempted despite the presence of an epidural catheter that functioned well during labor and was left in situ, no rationale for this decision could be found in the medical records. Provider preference could have been guided by either distrust of a catheter that had not been infused for some time or a lack of patience or time available to reactivate the catheter to achieve adequate surgical anesthesia. Prior provider experience with failed epidurals might also have played a part.
Some have recommended spinal anesthesia even in parturients with indwelling epidural catheters to avoid less-than-perfect epidural reactivation rates and minimize time delays and costs4. In fact, in a published survey of BTL practices in academic institutions, up to 40% of respondents elect not to leave a catheter in situ to be used after delivery6. Our study is obviously not powered to evaluate complication rates associated with the different anesthetic techniques used for PPTL. One of the largest studies of PPTL found very low rates of complications of any type and, notably, 86% of the procedures in this series were done with general anesthesia24.\n\nIn summary, our study found lower success rates for epidural reactivation for PPTL than those reported in the literature. There was also a lower success rate for spinal anesthetics placed after an epidural catheter was used to provide labor analgesia. We were unable to find clinical predictors for the failure rate. The need for conversion to general anesthesia, besides being attributed to an insufficient block, may reflect a lower level of motivation on behalf of both the patients and anesthesia providers to tolerate suboptimal anesthesia when fetal considerations are no longer a factor and some aspects of maternal physiology are already less concerning for the use of general anesthesia.\n\n\nData availability\n\nDataset 1. Complete data on demographics and the treatment given to each patient surrounding postpartum tubal ligation, including details on treatment method and the pharmaceuticals used (with dose). Also included is a guide to the abbreviations used. DOI: https://doi.org/10.5256/f1000research.16025.d21846625.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThis work was presented as an abstract at the 2017 49th Annual meeting of the Society for Obstetric Anesthesia and Perinatology (SOAP) in Bellevue, WA, USA.\n\n\nReferences\n\nCommittee on Health Care for Underserved Women: Committee opinion no. 530: access to postpartum sterilization. Obstet Gynecol. 2012; 120(1): 212–5.\n\nGoodman EJ, Dumas SD: The rate of successful reactivation of labor epidural catheters for postpartum tubal ligation surgery. Reg Anesth Pain Med. 1998; 23(3): 258–61.\n\nVincent RD Jr, Reid RW: Epidural anesthesia for postpartum tubal ligation using epidural catheters placed during labor. J Clin Anesth. 1993; 5(4): 289–91.\n\nViscomi CM, Rathmell JP: Labor epidural catheter reactivation or spinal anesthesia for delayed postpartum tubal ligation: a cost comparison. J Clin Anesth. 1995; 7(5): 380–3.\n\nPowell MF, Wellons DD, Tran SF, et al.: Risk factors for failed reactivation of a labor epidural for postpartum tubal ligation: a prospective, observational study. J Clin Anesth. 2016; 35: 221–4.\n\nMcKenzie C, Akdagli S, Abir G, et al.: Postpartum tubal ligation: A retrospective review of anesthetic management at a single institution and a practice survey of academic institutions. J Clin Anesth. 2017; 43: 39–46.\n\nRichardson MG, Hall SJ, Zuckerwise LC: Postpartum Tubal Sterilization: Making the Case for Urgency. Anesth Analg. 2018; 126(4): 1225–31.\n\nThurman AR, Janecek T: One-year follow-up of women with unfulfilled postpartum sterilization requests. Obstet Gynecol. 2010; 116(5): 1071–7.
\n\nCuratolo M, Orlando A, Zbinden AM, et al.: A multifactorial analysis to explain inadequate surgical analgesia after extradural block. Br J Anaesth. 1995; 75(3): 274–81.\n\nHogan Q: Distribution of solution in the epidural space: examination by cryomicrotome section. Reg Anesth Pain Med. 2002; 27(2): 150–6.\n\nHuffnagle SL, Norris MC, Huffnagle HJ, et al.: Intrathecal hyperbaric bupivacaine dose response in postpartum tubal ligation patients. Reg Anesth Pain Med. 2002; 27(3): 284–8.\n\nHabib AS, Muir HA, White WD, et al.: Intrathecal morphine for analgesia after postpartum bilateral tubal ligation. Anesth Analg. 2005; 100(1): 239–43.\n\nTeoh WH, Ithnin F, Sia AT: Comparison of an equal-dose spinal anesthetic for cesarean section and for post partum tubal ligation. Int J Obstet Anesth. 2008; 17(3): 228–32.\n\nPan PH, Bogard TD, Owen MD: Incidence and characteristics of failures in obstetric neuraxial analgesia and anesthesia: a retrospective analysis of 19,259 deliveries. Int J Obstet Anesth. 2004; 13(4): 227–33.\n\nKinsella SM: A prospective audit of regional anaesthesia failure in 5080 Caesarean sections. Anaesthesia. 2008; 63(8): 822–32.\n\nRussell IF: Technique of anaesthesia for caesarean section. Raising the Standards: A Compendium of Audit Recipes. 2006; 166–7.\n\nEinhorn LM, Habib AS: Evaluation of failed and high blocks associated with spinal anesthesia for Cesarean delivery following inadequate labour epidural: a retrospective cohort study. Can J Anaesth. 2016; 63(10): 1170–8.\n\nAbouleish EI: Postpartum tubal ligation requires more bupivacaine for spinal anesthesia than does cesarean section. Anesth Analg.
1986; 65(8): 897–900.\n\nHuffnagle SL, Norris MC, Leighton BL, et al.: Do patient variables influence the subarachnoid spread of hyperbaric lidocaine in the postpartum patient? Reg Anesth. 1994; 19(5): 330–4.\n\nAdesope OA, Einhorn LM, Olufolabi AJ, et al.: The impact of gestational age and fetal weight on the risk of failure of spinal anesthesia for cesarean delivery. Int J Obstet Anesth. 2016; 26: 8–14.\n\nGuglielmo L, Pignataro A, Di Fiore G, et al.: Conversion of spinal anesthesia into general anesthesia: an evaluation of more than 35,000 spinal anesthetics. Minerva Anestesiol. 2010; 76(9): 714–9.\n\nTraynor AJ, Aragon M, Ghosh D, et al.: Obstetric Anesthesia Workforce Survey: A 30-Year Update. Anesth Analg. 2016; 122(6): 1939–46.\n\nPractice Guidelines for Obstetric Anesthesia: An Updated Report by the American Society of Anesthesiologists Task Force on Obstetric Anesthesia and the Society for Obstetric Anesthesia and Perinatology. Anesthesiology. 2016; 124(2): 270–300.\n\nHuber AW, Mueller MD, Ghezzi F, et al.: Tubal sterilization: complications of laparoscopy and minilaparotomy. Eur J Obstet Gynecol Reprod Biol. 2007; 134(1): 105–9.\n\nDelgado C, Van Cleve W, Kent C, et al.: Dataset 1 in: Neuraxial anesthesia for postpartum tubal ligation at an academic medical center. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16025.d218466"
}
|
[
{
"id": "38781",
"date": "18 Oct 2018",
"name": "Christine P. McKenzie",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe article presents a small retrospective review of anesthetic management for postpartum tubal ligations. The article is well-written and organized. The data are presented clearly and concisely.\n\nThe authors outline limitations of the study such as a small number of patients and generalizability. However, it adds to the current literature on PPTL and higher than expected neuraxial anesthesia failure rates. There are no identified predictors for success/failure of epidural reactivation; however, given the low numbers the study is underpowered to assess predictors.\n\nThe article discusses the clinical considerations for anesthetic management and the importance of completing these urgent PPTL procedures.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "40803",
"date": "12 Dec 2018",
"name": "Dana R. Gossett",
"expertise": [
"Reviewer Expertise Obstetric care",
"operative obstetrics",
"evidence-based practice"
],
"suggestion": "Approved",
"report": "Approved\n\nGeneral comments: This is a topic of interest to obstetric anesthesiologists and obstetricians, and highlights the role of obstetric anesthesiologists in providing access to a needed service that often goes unprovided (postpartum tubal ligation, PPTL). The authors examine success rates of pre-existing labor epidurals for surgical anesthesia for tubal ligation.\n\nStudy design: This is a retrospective study of women undergoing postpartum tubal ligation at a single institution over a 6-year period. Method of labor analgesia and method of anesthesia for the PPTL were collected, and the rate of successful re-use of the labor epidural for the PPTL was calculated.\n\nAbstract:\nThe abstract provides an accurate synopsis of the paper’s design and findings.\n\nIntroduction: Overall the introduction is quite short, but does provide appropriate background and citations. Do the authors have a citation to support the statements “Within our practice, and sometimes in the broader obstetric anesthesia community, providers have suggested that rates of failure when attempting catheter “reactivation” for PPTL do not support the practice leaving a labor epidural catheter in place for an interval PPTL. Instead, these providers advocate the routine removal of the epidural catheter followed by a de novo spinal anesthetic (SA)”?\n\nMaterials & Methods: This is a retrospective review of all women undergoing PPTL over a 6-year period.
Data were collected regarding demographic and obstetric factors, anesthetic method and dosing, and factors previously identified as predictors of failure of epidural “reactivation” after delivery.\n\nResults, Tables/Figures: In general, the results are clearly communicated. During the study period, only 93 women underwent PPTL, 70 of whom had labor epidurals. Of these, only 33 actually had an attempt to reactivate the epidural for use during the PPTL; this then represents the target population, a very small number. Of these, 67% had successful reactivation of their epidurals.\n\nFor women who received spinal anesthesia (SA) to complete their PPTL, fewer had successful SA if they’d had a prior epidural (74% successful spinal vs 91% of those who had not had a labor epidural).\n\nDiscussion: The discussion section is very long in proportion to the remainder of this paper and could be substantially trimmed. Paragraph 2 could be moved to the introduction/background.\n\nWhile the authors state that previously published predictors of epidural re-activation were not confirmed in this study, they are underpowered to evaluate some. For example, the number of patients requiring “top ups” (redoses of the epidural) was 9, with 4 in the successful reactivation group and 5 in the failed group, for rates of 18% and 45%. Because of small numbers, the p-value for this comparison is 0.09, but it is likely that it would reach statistical significance with greater numbers.\n\nOne of the most significant limitations, as acknowledged by the authors, is the lack of information about why/how decisions were made by the anesthesiologists about whether or not to use an existing epidural, and what to use next or instead.
This introduces the potential for selection bias of subjects having re-activation attempts.\n\nIt would be ideal if the authors concluded their discussion by tying their findings back to the larger question they are investigating—can they make any recommendations about what the optimal method would be? Or, as their study may not provide this, what type of investigation would they recommend to get to that answer?\n\nSummary: This retrospective study of anesthetic use for PPTL demonstrated a lower rate of successful re-activation of labor epidurals than previous reports. There is no clear explanation of this unanticipated finding.\n\nA prospective trial would better address this by removing clinician bias/practice patterns from the decision-making about anesthetic mode, and limiting differences (known and unknown) between the two groups of patients. Given the infrequency of PPTL, a multicenter trial would be required to accrue patients in a reasonable amount of time, so the choice of a retrospective review is understandable.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "40801",
"date": "04 Feb 2019",
"name": "Feyce Peralta",
"expertise": [
"Reviewer Expertise Obstetric anesthesiologist"
],
"suggestion": "Approved",
"report": "Approved\n\nGeneral comments: Relevant topic for obstetricians and obstetric anesthesiologists with key public health implications. Authors evaluated the success rate of different neuraxial techniques for PPTL. Retrospective study using a small dataset from a single institution.\n\nIntroduction: “Tubal ligation in the immediate postpartum period, postpartum tubal ligation (PPTL), is typically performed on the labor ward, but less than 50% of women that desire PPTL receive the procedure in the immediate postpartum period, despite The American College of Obstetricians and Gynecologists (ACOG) defining PPTL as an urgent procedure due to the limited optimal surgical timeframe1.” I suggest placing reference number 8 (Thurman) before current reference number 1 (Committee opinion no. 530). Consider splitting this sentence in two.\n\nMethods: Retrospective study of patients who underwent PPTL after vaginal delivery over a 6-year period at a single institution.\nI recommend expanding the definition of successful epidural catheter reactivation (e.g., completion of PPTL under epidural anesthesia with the same epidural catheter previously used for labor analgesia and delivery).\n\nThe authors should clarify who performed the chart review. 1 investigator? 2 investigators?\n\nThe authors could include a pregnancy risk stratification (low vs. high risk) as part of their demographic data. Albanese et al. (20171) reported on the request and fulfilment of PPTL according to pregnancy risk.
Anesthetic practices are likely to be affected by this variable.\n\nResults: The authors analyzed data from a very small number of patients. Overall, this section is very easy to read and to follow.\n\nDiscussion: The greatest limitation of this study is the small sample size. Therefore, conclusions cannot be made based on this study regarding predictors for the success of neuraxial anesthesia for PPTL. Future directions could be added to the discussion section.\nThe authors should consider moving the second and third sentences to the introduction section.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1557
|
https://f1000research.com/articles/7-1556/v1
|
26 Sep 18
|
{
"type": "Research Article",
"title": "Efficacy of the combination of crude extracts of Solanum nigrum and Plumbago capensis on Leishmania major",
"authors": [
"Christine N. Mutoro",
"Johnson Kinyua",
"Joseph Ng'ang'a",
"Daniel Kariuki",
"Johnson M. Ingonga",
"Christopher O. Anjili",
"Johnson Kinyua",
"Joseph Ng'ang'a",
"Daniel Kariuki",
"Johnson M. Ingonga",
"Christopher O. Anjili"
],
"abstract": "Background: Leishmaniasis is an endemic tropical disease caused by Leishmania parasites, transmitted mainly by phlebotomine sandflies, impacting both health and socioeconomic wellbeing. Currently there are inadequate therapeutic measures to manage the disease, indicating the need for the development of affordable and effective therapeutic interventions from herbal plants as alternative medicine. This study investigated the in vitro antileishmanial effects of blends of crude extracts of Solanum nigrum and Plumbago capensis against Leishmania major. Methods: The promastigote parasites of Leishmania major were cultured and grown for 3 days in different concentrations of the individual extracts to determine minimum inhibitory concentrations (MIC). The in vitro antileishmanial efficacy was determined by exposing promastigotes and macrophages infected with L. major to the blends of extracts in ratios of 2000:250, 1000:500, 500:1000 and 250:2000. Finally, nitric oxide released by L. major-infected macrophages that were treated with the plant extracts at a ratio of 125:125 was quantified using a standard nitrite curve. Results: The individual methanol extracts were most effective in inhibiting the growth of promastigotes, with MIC values between 0.25 mg/ml and 1.0 mg/ml, as compared to aqueous extracts. The most active ratios for the blends were 250:2000 and 2000:250 for methanolic and aqueous blends respectively. The infection rates and multiplication indices associated with all the combined extracts were significantly different (P < 0.05) from those of pentostam and amphotericin B at all the concentrations studied. The optical density (OD) for the combined test extracts ranged between 0.034 and 0.041, corresponding to < 5 µM of NO. Conclusion: Findings from this study demonstrate that combination therapy using S. nigrum and P. capensis extracts is effective in treating Leishmania major infection.
Based on our findings, we recommend that in vivo studies be conducted to determine the efficacy of these combined therapies against Leishmania major.",
"keywords": [
"Leishmaniasis",
"Leishmania",
"Combined",
"Plumbago capensis",
"Solanum nigrum"
],
"content": "Introduction\n\nLeishmaniases are diseases caused by Leishmania parasites transmitted by the bite of female phlebotomine sand flies (Piscopo & Mallia, 2006). Desjeux (2004) indicates that 350 million people are at risk globally, 12 million people are infected with Leishmania parasites and that as many as 2 million new cases occur each year in over 80 countries.\n\nStudies conducted by Abreu Miranda et al. (2013) and de Carvalho & Ferreira (2001) indicate that screening of plant extracts and plant-derived compounds is an effective therapy for leishmaniases that avoids exposure to potentially toxic drugs. The World Health Organisation (WHO) (2006) reported that the most pressing research needs for Leishmania control are the search for alternative and cheap drugs for oral, parenteral (injections) or topical administration in shorter treatment cycles, and identification of mechanisms to facilitate access to existing control measures, including health sector reform in some developing countries.\n\nWith increasing unresponsiveness to most monotherapeutic regimens, combination therapy has found new scope in the treatment of leishmaniasis. The findings of Jha et al. (2005) indicated that the combination of antileishmanial drugs could reduce potential toxic side effects, prevent drug resistance and increase efficacy when the drugs are used in conjunction. Firooz et al. (2006) and Mishra et al. (2011) reported the superiority of the combination of paromomycin with other drugs for the treatment of visceral leishmaniasis. Studies by Jha et al. (2005) and Firooz et al. (2006), which evaluated combined chemotherapy against visceral leishmaniasis in Kenya using oral allopurinol and endogenous pentostam, demonstrated the superiority of the combined drugs. Research shows that natural libraries of plant compounds with recognized antiparasitic activities can be screened and used in the development of antileishmanial compounds.
This study investigated the effect of combining crude extracts of Solanum nigrum and Plumbago capensis on Leishmania major parasites in vitro.\n\n\nMethods\n\nThe proposal for this research work was submitted to the KEMRI Scientific Steering Committee (SSC) for approval and was given ethical clearance (Number: KEMRI/SSC-2028) on the use of mice as the animal model by the Ethical Review Committee (ERC). At the end of the experiment, all animals were sacrificed by injection of 100 µl sagatal and disposed of by incineration according to the regulations of the Animal Care and Use Committee (ACUC).\n\nThe in vitro studies were carried out using a comparative study design. Pentostam (Glaxo Operations (UK) Limited, Barnard Castle, UK) and amphotericin B (AmBisome®; Gilead, Foster City, CA, USA) were used as the standard drugs to compare their efficacy with those of the test extracts. RPMI-1640 and Schneider’s Drosophila media (Thermo Fisher Scientific, Waltham, Massachusetts, USA) were used as the controls in the in vitro experimental chemotherapeutic studies.\n\nFresh leaves of Solanum nigrum were collected from Kisii and Bungoma, Kenya (0° 40' 49.7352'' S and 34° 46' 37.4196'' E), where the plant is abundant. Plumbago capensis, whose activity has been established (Makwali et al., 2015), was collected from the Upper Hill area of Nairobi County, Kenya (1°17'59.0\"S, 36°48'58.0\"E). The plants were transferred to the Center of Traditional Medicine and Drug Research (CTMDR) at KEMRI (Nairobi, Kenya) and dried at 25°C in the shade until they became brittle and attained a constant weight. The dried plants were separately ground using an electric mill (Christy & Norris Ltd., Chelmsford, England, model 8) into powder, followed by extraction using water and analytical grade methanol (Sigma, 82762). The methanolic extracts were prepared as described by Mutoro et al. (2018a). 
Briefly, 100 g of ground plant material was soaked in 500 ml of analytical grade methanol for 72 h at room temperature with gentle shaking. The mixture was filtered using Whatman No.1 filter papers (Sigma, Z240079) and concentrated using a rotary evaporator (Cole-Parmer - Stuart - RE400) to obtain dry methanolic extracts. The extracts were coded as A, B and C for methanolic extracts of S. nigrum (Bungoma), S. nigrum (Kisii) and P. capensis, respectively. The aqueous extracts were also prepared as described by Mutoro et al. (2018a). Briefly, 100 g of the dried ground plant material in 600 ml of distilled water was placed in a water bath at 70°C for 1.5 h. The mixture was filtered using Whatman No.1 filter papers and then the filtrate was frozen, dried and weighed. The extracts were coded as D, E and F for P. capensis, S. nigrum (Kisii) and S. nigrum (Bungoma), respectively. The extracts were then stored at 4°C until required for bioassays.\n\nA total of six 8-week-old male inbred BALB/c mice weighing between 25 and 29 g were obtained from KEMRI. There were eight BALB/c mice per cage (Orchid Scientific, SMP 01) in the animal housing, which was kept at 23–25°C under a twelve-hour light/dark cycle; the mice were fed a standard diet of mouse pellets and given tap water ad libitum. The mice were handled in accordance with the regulations set by the Animal Care and Use Committee at KEMRI. The mice were used for extraction of the peritoneal macrophages used in the anti-amastigote assay and for quantification of nitric oxide produced by macrophages treated with blends of extracts.\n\nThe Leishmania major strain (IDUB/KE/94=NLB-144), originally isolated in 1983 from a female Phlebotomus duboscqi collected from Marigat, Baringo County, Kenya, was used. 
The parasites were grown to stationary phase at 25°C in Schneider’s Drosophila medium (Fisher Scientific, 21720024) supplemented with 20% heat-inactivated fetal bovine serum (FBS) (Hyclone® USA, SH30071031H), 100 U/ml penicillin and 500 µg/ml streptomycin (Hendricks & Wright, 1979), and 250 µg/ml 5-fluorocytosine arabinoside (Kimber et al., 1981). The stationary-phase metacyclic promastigotes were then harvested by centrifugation at 1500 g for 15 min at 4°C (Thermo Fisher Scientific 75004061 mySPIN 6 Mini Centrifuge, 1189M94EA) and used for the in vitro assays.\n\nStock solutions of the crude plant extracts were made as described by Mutoro et al. (2018a). Briefly, plant extracts were dissolved in Schneider’s Drosophila culture medium (Fisher Scientific, 21720024) for anti-leishmanial assays and filtered through 0.22-µm filter flasks in a laminar flow hood (biological safety cabinet). The stock solutions were then stored at 4°C and retrieved later for both in vitro bioassays.\n\nThe MICs were determined as described by Wabwoba et al. (2010). Briefly, L. major metacyclic promastigotes at a concentration of 1×10⁶ promastigotes per ml of culture medium were treated with the individual methanolic and aqueous test extracts at concentrations of 2000 µg/ml, 1000 µg/ml, 500 µg/ml and 250 µg/ml. Similarly, the promastigotes were treated with combined extracts in fixed ratios of 2000:250, 1000:500, 500:1000 and 250:2000. The L. major promastigotes treated with the single extracts and the blends were stained with 100 µl of trypan blue dye on a microscope slide and observed under a light microscope (XSZ-107 Series Biological Microscope, Sam-Tech Diagnostics) to check their motility and viability compared to Schneider’s Drosophila medium as the negative control. 
The lowest concentration of the individual test extracts and the blends at which no live promastigotes were observed was taken as the MIC for the individual extracts and the active ratio for the blends, respectively.\n\nThe anti-amastigote assay was carried out as described by Mutoro et al. (2018a). Peritoneal macrophages were obtained from 4 clean BALB/c mice. The mice were anaesthetized using 100 µl pentobarbitone sodium (Sagatal®; Sigma, P3761). The body surface of each mouse was disinfected with 70% ethanol, after which it was torn dorso-ventrally to expose the peritoneum. Sterile cold phosphate buffered saline (PBS; 10 µl) was injected into the peritoneum, which was then gently massaged for 2 minutes to dislodge and release macrophages into the PBS. The peritoneal macrophages were then harvested by withdrawing the PBS. The PBS containing the macrophages was washed by centrifugation at 2,000 g for 10 minutes and the pellet obtained was re-suspended in RPMI-1640 culture medium. The macrophages were allowed to adhere in 24-well plates for 4 hours at 37°C in 5% CO2. Non-adherent cells were washed off with cold sterile PBS and the adherent macrophages were incubated overnight in RPMI culture medium. Adherent macrophages were then infected with L. major promastigotes and incubated further at 37°C in 5% CO2 for 4 hours, after which they were washed with sterile PBS to remove free promastigotes that had not been engulfed by the macrophages. This was followed by incubation of the preparation for 24 hours in RPMI-1640 culture medium. The infected macrophages were then treated with combinations of both aqueous and methanolic extracts at fixed ratios of 500:125, 125:125 and 125:500. Pentostam and liposomal amphotericin B were used as positive control drugs to compare parasite inhibition with that of the blends of plant extracts. The medium and the blends of test extracts or drugs were replenished daily for 3 days. 
After 5 days, the macrophages were washed with sterile PBS at 37°C, fixed in methanol and stained with 10% Giemsa (Thermo Scientific™, 9990715). The number of amastigotes was determined by counting microscopically at least 100 infected macrophages in triplicate cultures, and the count was expressed as the infection rate (IR) and multiplication index (MI) as described by Berman & Lee (1984) using the formulas below:\n\nIR (%) = Number of infected macrophages per 100 macrophages.\n\nMI (%) = (Number of amastigotes in experimental culture per 100 macrophages ÷ Number of amastigotes in control culture per 100 macrophages) × 100\n\nMeasurement of nitric oxide (NO) production was carried out as described by Gamboa-Leon et al. (2007). BALB/c mice peritoneal macrophages at a concentration of 1×10⁵ cells per well of culture medium were placed in 96-well microtiter plates and allowed to adhere at 37°C in a 5% CO2 humidified atmosphere. Two hours later, the peritoneal macrophages were incubated further in RPMI-1640 medium with 10% FBS for 48 hours in the presence of blends of the aqueous and methanolic test extracts and the controls. At least 100 µl of macrophage culture supernatant was collected and frozen until required for NO measurement. NO was measured using the Griess reaction for nitrites (NO2) as described by Hollzmuller et al. (2002). NO2 is one of the products released when NO breaks down in the macrophages; NO in the collected supernatants was therefore estimated by quantifying the NO2 content. A nitrite standard reference curve was prepared by dispensing 50 µl of RPMI-1640 with 10% FBS into the wells of rows B-H of the first 3 columns in a 96-well plate. 
A 100 µM sodium nitrite solution was added to the remaining 3 wells in row A of the 96-well microtiter plate, and six serial two-fold dilutions (50 µl/well) were immediately performed in triplicate down columns 1, 2 and 3 to generate a curve corresponding to the concentrations 100, 50, 25, 12.5, 6.25, 3.125 and 1.563 µM.\n\nSecondly, 50 µl of the sample supernatant from the wells with macrophages treated with blends of test extracts at the fixed ratio of 125:125 was added to the wells in triplicate. Griess reagent A (Fisher Scientific, G7921) was dispensed into all the experimental samples and into the wells containing sodium nitrite solution. Following an incubation of 5 minutes at room temperature, 50 µl of Griess reagent B (Fisher Scientific, G7921) in water was dispensed into all the wells and incubated for a further 5 minutes at room temperature before measuring the optical density (OD) of the purple/magenta azo compound at 520 nm using a microtiter plate reader (PerkinElmer). Nitrite values for each blend of extracts were read from the plotted standard curve as the NO concentration corresponding to the absorbance of each sample.\n\nThe data for infection rates and multiplication indices were recorded as percentages and analyzed using the SPSS 13.0 program. The results were expressed as mean values ± standard deviation (SD). Statistical analysis was performed using one-way ANOVA and Tukey’s post hoc test, and P values < 0.05 were considered significant.\n\n\nResults\n\nMICs for the single extracts and active ratios for the blends of extracts were determined by observing the motility and viability of the parasites in the wells compared to Schneider’s Drosophila medium (SIM) as the negative control (Table 1 and Table 2).\n\nThe MICs of the individual methanolic extracts, S. nigrum from Bungoma (A), S. nigrum from Kisii (B) and P. capensis (C), against L. 
major promastigotes were 1 mg/ml, 0.5 mg/ml and 0.25 mg/ml, respectively. The MICs of the individual aqueous extracts, P. capensis (D), S. nigrum from Kisii (E) and S. nigrum from Bungoma (F), were 0.5 mg/ml, 2 mg/ml and 2 mg/ml, respectively. In comparison, the MICs of pentostam and amphotericin B against L. major promastigotes were both 0.03125 mg/ml. Schneider’s Drosophila medium supported the survival of L. major promastigotes to the maximum (Table 3).\n\nThe extract concentrations ranged from 2000 µg/ml to 250 µg/ml, while the concentrations of the positive controls ranged from 125 µg/ml to 15.625 µg/ml.\n\nThe survival levels of promastigotes after treatment with blends of extracts were determined by comparison with survival in both the positive controls (pentostam and amphotericin B) and the negative control (Schneider’s Drosophila medium). Blends of the methanolic extracts of S. nigrum (Bungoma) with P. capensis (AC) and of S. nigrum (Kisii) with P. capensis (BC) at a ratio of 250:2000 inhibited L. major promastigotes in vitro by 100%, while ratios of 500:1000 and 1000:500 decreased promastigote survival to minimum (25%) levels (+) and moderate (50%) levels (++), respectively. The ratio 2000:250 supported the survival of promastigotes to highly moderate (75%) levels (+++) when compared with the controls. A blend of the methanolic extracts of S. nigrum from Bungoma and Kisii (AB) was efficacious in inhibiting parasite growth to minimum levels (+) at a ratio of 1:8, and there was moderate growth at ratios of 1000:500 and 500:1000 (Table 4).\n\n++++ indicates maximum (100%) survival, +++ shows highly moderate (75%) survival, ++ shows moderate (50%) survival, + shows minimum (25%) survival and – indicates absence of detectable live promastigotes when compared to both the positive and negative controls.\n\nMICᵃ = minimum inhibitory concentration, which indicated the level at which the extract inhibited the promastigotes. S. nigrum - Solanum nigrum, P. 
capensis - Plumbago capensis\n\nA blend of the aqueous extracts of P. capensis and S. nigrum from Kisii (DE) at a ratio of 2000:250 led to complete inhibition of parasite growth, while ratios of 1000:500, 500:1000 and 250:2000 supported growth of the parasites. A blend of the aqueous extracts of P. capensis and S. nigrum from Bungoma (DF) at a ratio of 2000:250 inhibited the growth of the parasites to minimum levels (+), and a blend of S. nigrum from Kisii and Bungoma (EF) supported growth of the parasites at all ratios (Table 4).\n\nBoth pentostam and amphotericin B inhibited the growth of L. major promastigotes in vitro at a concentration of 31.25 µg/ml. Schneider’s Drosophila medium, on the other hand, supported maximum (100%) survival of L. major promastigotes, as indicated by four pluses (++++), when compared to the positive controls, pentostam and amphotericin B (Table 4).\n\nWhen the methanolic extracts were combined in ratios of 500:125, 125:125 and 125:500, the blend of S. nigrum from Kisii and P. capensis (BC) had the lowest infection rate of 46.7% at a concentration ratio of 125:500. Similarly, the methanolic extract combinations of S. nigrum from Bungoma and Kisii (AB) and of S. nigrum from Bungoma and P. capensis (AC) resulted in an infection rate (IR) of 61.7% at the same ratio (Table 5).\n\nCombinations of the aqueous extracts of P. capensis and S. nigrum from Kisii (DE), P. capensis and S. nigrum from Bungoma (DF) and S. nigrum from Kisii and Bungoma (EF) at the ratio of 125:500 resulted in infection rates of 70.0%, 78.0% and 78.7%, respectively. The efficacy of the combined aqueous extracts of S. nigrum from Kisii and Bungoma (EF) at the ratio of 125:500 in inhibiting the infectivity and multiplication of L. major amastigotes in BALB/c peritoneal macrophages in vitro was lower than that of all the other blends of extracts (Figure 1 and Table 5). The blend of the methanolic extracts of S. nigrum from Kisii and P. 
capensis (BC) performed best, with a multiplication index (MI) of 50.6% at the ratio of 125:500.\n\nRPMI-1640 medium with no drug incorporated supported the growth of L. major amastigotes in peritoneal macrophages (Figure 2), as indicated by a high infection rate (IR) of 89.7% (Table 5). In contrast, the antileishmanial drugs pentostam and liposomal amphotericin B inhibited the in vitro survival of L. major amastigotes more effectively, corresponding to low infection rates of 26.3% and 21.0%, respectively, at a concentration of 50 µg/ml (Table 4). The IRs and MIs associated with all the combined extracts were significantly different (P < 0.05) from those of pentostam and amphotericin B at all the concentrations studied.\n\nNitric oxide (NO) plays a key role as a leishmanicidal effector molecule in host macrophages (Gamboa-Leon et al., 2007). Therefore, the effect of the blends of test extracts on NO production was evaluated in vitro. BALB/c mice peritoneal macrophages were incubated in RPMI-1640 medium for 48 hours in the presence or absence of blends of test extracts. To determine the amount of NO triggered by the combined extracts, the optical densities (absorbances) of the supernatants were determined using the Griess reagent system. All absorbances for the combined extracts ranged from 0.034 to 0.041 (Table 6). These ODs corresponded to < 5 µM NO on the standard nitrite curve for the blends of both the methanolic and aqueous extracts. RPMI-1640 medium produced similarly negligible levels of NO (Table 7).\n\n\nDiscussion\n\nNatural products that have been found to possess antileishmanial activities can provide alternative treatments for antimonial-resistant Leishmania strains (Monzote, 2009). In cases where the infectious agent fails to respond to single therapy, combined therapy is often adopted. Studies by Melaku et al. (2007), Nyakundi et al. 
(1994) and Sundar et al. (2008) on leishmaniases reported that combining antileishmanial drugs improved their efficacy and reduced resistance, the dosage required and toxicity levels.\n\nAs observed in the current study, the standard drugs were significantly more effective (P ≤ 0.05) against Leishmania promastigotes and amastigotes compared to all the blends of the extracts. It was also observed that all the blends of extracts induced little (< 1.5 µM) production of NO by peritoneal macrophages, which might have played a role in the amastigote inhibition.\n\nA study by Freitas-Junior et al. (2012) demonstrated that combinations of miltefosine with amphotericin B or paromomycin were effective against antimony-resistant VL infections. A study conducted by Melaku et al. (2007) indicated that a combination of paromomycin with sodium stibogluconate was more effective than sodium stibogluconate used alone. Similarly, a study by Ghazanfari et al. (2000) demonstrated that garlic extract in combination with glucantime reduced the lesion size caused by Leishmania major more effectively than either used alone.\n\nHerbal drugs have also been used in combination for centuries to treat various infectious diseases. The findings of Yousefi et al. (2009) indicated that a combination of the extracts of Alkanna tincturia and Peganum harmala in a ratio of 1:1 (10:10 µg/ml) showed a better in vitro effect against Leishmania major than the single extracts. The study conducted by Makwali et al. (2012) indicated that combination therapy with a plumbaginaceae extract and a triterpenoid saponin extract, in combination with acridine and dinitroaniline herbicides, resulted in complete clearance of parasitemia from both the lesion site and the internal organs of L. major-infected BALB/c mice. Another study, by Ndungu et al. (2017), revealed that the water and methanolic extracts of A. secundiflora and P. 
capensis can be used either separately or in combination as antileishmanial therapeutic agents.\n\n\nConclusion\n\nThis study has demonstrated that combination therapy using Plumbago capensis together with S. nigrum resulted in complete inhibition of the growth of L. major parasites. The emergence of antimonial-resistant Leishmania strains is on the rise, and natural products and other plant products that have been tested and found to possess antileishmanial activities may provide alternative treatments. On the basis of these results, considered together with existing reports on safe doses and side effects, combination drug therapy is a promising approach for the treatment of L. major infection.\n\n\nData availability\n\nDataset 1: Anti-amastigote (macrophage) assays 10.5256/f1000research.15955.d217390 (Mutoro et al. 2018b)\n\nDataset 2: Quantification of nitric oxide (NO) produced 10.5256/f1000research.15955.d217391 (Mutoro et al. 2018c)",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAbreu Miranda M, Tiossi RF, da Silva MR, et al.: In vitro leishmanicidal and cytotoxic activities of the glycoalkaloids from Solanum lycocarpum (Solanaceae) fruits. Chem Biodivers. 2013; 10(4): 642–648. PubMed Abstract | Publisher Full Text\n\nBerman JD, Lee LS: Activity of antileishmanial agents against amastigotes in human monocyte-derived macrophages and in mouse peritoneal macrophages. J Parasitol. 1984; 70(2): 220–225. PubMed Abstract | Publisher Full Text\n\nde Carvalho PB, Ferreira EI: Leishmaniasis phytotherapy. Nature's leadership against an ancient disease. Fitoterapia. 2001; 72(6): 599–618. PubMed Abstract | Publisher Full Text\n\nDesjeux P: Leishmaniasis: current situation and new perspectives. Comp Immunol Microbiol Infect Dis. 2004; 27(5): 305–18. PubMed Abstract | Publisher Full Text\n\nFirooz A, Khamesipour A, Ghoorchi MH, et al.: Imiquimod in combination with meglumine antimoniate for cutaneous leishmaniasis: a randomized assessor-blind controlled trial. Arch Dermatol. 2006; 142(12): 1575–1579. PubMed Abstract | Publisher Full Text\n\nFreitas-Junior LH, Chatelain E, Kim HA, et al.: Visceral leishmaniasis treatment: What do we have, what do we need and how to deliver it? Int J Parasitol Drugs Drug Resist. 2012; 2: 11–19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGamboa-Leon R, Aranda-Gonzalez I, Mut-Martin M, et al.: In vivo and In vitro control of Leishmania mexicana due to garlic-induced NO production. Scand J Immunol. 2007; 66(5): 508–514. PubMed Abstract | Publisher Full Text\n\nGhazanfari T, Hassan ZM, Ebtekar M, et al.: Garlic induces a shift in cytokine pattern in Leishmania major-infected BALB/c mice. Scand J Immunol. 2000; 52(5): 491–495. 
PubMed Abstract | Publisher Full Text\n\nHendricks L, Wright N: Diagnosis of cutaneous leishmaniasis by In vitro cultivation of saline aspirates in Schneider's Drosophila Medium. Am J Trop Med Hyg. 1979; 28(6): 962–964. PubMed Abstract | Publisher Full Text\n\nHollzmuller P, Sereno D, Cavalrero M, et al.: Nitric oxide-mediated proteasome-dependent oligonucleosomal DNA fragmentation in Leishmania amazonensis amastigotes. Infect Immun. 2002; 70(7): 3727–3735. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJha TK, Sundar S, Thakur CP, et al.: A phase II dose-ranging study of sitamaquine for the treatment of visceral leishmaniasis in India. Am J Trop Med Hyg. 2005; 73(6): 1005–11. PubMed Abstract | Publisher Full Text\n\nKimber CD, Evans DA, Robinson BL, et al.: Control of yeast contamination with 5-fluorocytosine in the in vitro cultivation of Leishmania spp. Ann Trop Med Parasitol. 1981; 75(4): 453–454. PubMed Abstract | Publisher Full Text\n\nMakwali JA, Wanjala FM, Kahuri JC, et al.: Combination and monotherapy of Leishmania major infection in BALB/c mice using plant extracts and herbicides. J Vector Borne Dis. 2012; 49(3): 123–130. PubMed Abstract\n\nMakwali JA, Wanjala FME, Ingonga J, et al.: In vitro Studies on the Antileishmanial Activity of Herbicides and Plant Extracts Against Leishmania major Parasites. Res J Med Plants. 2015; 9(3): 90–104. Publisher Full Text\n\nMelaku Y, Collin SM, Keus K, et al.: Treatment of kala-azar in southern Sudan using a 17-day regimen of sodium stibogluconate combined with paromomycin: a retrospective comparison with 30-day sodium stibogluconate monotherapy. Am J Trop Med Hyg. 2007; 77(1): 89–94. PubMed Abstract\n\nMishra BB, Kale RR, Prasad V, et al.: Scope of natural products in fighting against leishmaniasis. Opportunity, Challenge and Scope of Natural Products in Medicinal Chemistry Journal. 2011; 121–154. Reference Source\n\nMonzote L: Current treatment of Leishmaniasis: A review. 
The Open Antimicrobial Agents Journal. 2009; 1: 9–19. Reference Source\n\nMutoro CN, Kinyua JK, Ng'ang'a JK, et al.: In vitro study of the efficacy of Solanum nigrum against Leishmania major [version 1; referees: 1 approved with reservations]. F1000Res. 2018a; 7: 1329. Publisher Full Text\n\nMutoro CN, Kinyua J, Ng'ang'a J, et al.: Dataset 1 in: Efficacy of the combination of crude extracts of Solanum nigrum and Plumbago capensis on Leishmania major. F1000Research. 2018b. http://www.doi.org/10.5256/f1000research.15955.d217390\n\nMutoro CN, Kinyua J, Ng'ang'a J, et al.: Dataset 2 in: Efficacy of the combination of crude extracts of Solanum nigrum and Plumbago capensis on Leishmania major. F1000Research. 2018c. http://www.doi.org/10.5256/f1000research.15955.d217391\n\nNdungu PK, Ingonga JM, Gicheru M, et al.: Efficacy of a Combination of Plumbago capensis and Aloe secundiflora aqueous and methanolic Plant Extracts in the Treatment of Leishmania Major in Balb/C Mice. Ann Appl Microbiol Biotechnol J. 2017; 1(1): 1001. Reference Source\n\nNyakundi PM, Wasunna KM, Rashid JR, et al.: Is one year follow-up justified in kala-azar post-treatment? East Afr Med J. 1994; 71(7): 453–459. PubMed Abstract\n\nPiscopo TV, Mallia AC: Leishmaniasis. Postgrad Med J. 2006; 82(972): 649–657. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSundar S, Rai M, Chakravarty J, et al.: New treatment approach in Indian visceral leishmaniasis: single-dose liposomal amphotericin B followed by short-course oral miltefosine. Clin Infect Dis. 2008; 47(8): 1000–6. PubMed Abstract | Publisher Full Text\n\nWabwoba BW, Anjili CO, Ngeiywa MM, et al.: Experimental chemotherapy with Allium sativum (Liliaceae) methanolic extract in rodents infected with Leishmania major and Leishmania donovani. J Vector Borne Dis. 2010; 47(3): 160–167. PubMed Abstract\n\nWorld Health Organization: The leishmaniases and Leishmania/HIV co-infections. 
2006.\n\nYousefi R, Ghaffarifar F, Asl AD: The effect of Alkanna tincturia and Peganum harmala extracts on Leishmania major (MRHO/IR/75/ER) In vitro. Iran J Parasitol. 2009; 4(1): 40–47. Reference Source"
}
|
[
{
"id": "39475",
"date": "06 Nov 2018",
"name": "Edilene O. Silva",
"expertise": [
"Reviewer Expertise natural products against leishmaniasis"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript \"Efficacy of the combination of crude extracts of Solanum nigrum and Plumbago capensis on Leishmania major\" by Mutoro et al submitted to the F1000Research deals with an important subject, that of bioactive compounds extraction from plants and their anti-protozoa activities.\nThe results reported here should be useful to the community in searching for more effective drugs to treat Leishmaniasis. The manuscript is well written but the overall approach is quite incomplete. Thus, I have a few questions/concerns about the current work, as detailed below, which should be addressed before indexing. For any compound to be evaluated it is important that the methodology used is very clearly defined, and unfortunately this paper major lacunae is just that. In the Methodology section, the authors suggest that the leishmanicidal activity of the assayed extracts could be due to the presence of synergy between them. Could the authors provide the phytochemical profile of the assayed extracts?\n\nResults are not properly presented and discussed. The data are presented in seven table and two figures, however not properly presented, analyzed and interpreted. Anti promastigote activity was measured using L. amazonensis promastigotes (106 parasites/mL) and cell viability tested daily for 5 days using light microscopy. 
My first objection is to the use of light microscopy to measure parasite viability; it is very error prone owing to the motility of the parasites, and better semi-quantitative methods are available (MTT or Alamar Blue assay). Later, in the anti-amastigote assay, they exposed infected macrophages to extracts daily for 3 days post infection, and cells were fixed and stained after 5 days? I am unable to understand how this approach was adopted. I think the test approach is flawed and needs to be reviewed. In addition, the authors should perform a cytotoxicity assay.\nHow was the in vitro dose decided? Why were high concentrations of the extracts (2000 μg/mL, 1000 μg/mL, 500 μg/mL and 250 μg/mL) used?\nWhen data is uneven, judgment about its value is difficult. Thus, several concerns can be raised about the value of this work.\nLater, in the discussion, they state that “As observed in the current study, the standard drugs were significantly more effective (P≤ 0.05) against Leishmania promastigotes and amastigotes as compared to all the blends of the extracts. It was observed in the present study that all the blends of extracts induced little (< 1.5 µM) production of NO by peritoneal macrophages which might have played a role in the amastigote inhibition”. So the reader ends up totally confused and demotivated to proceed to the conclusion section. In some paragraphs the authors only review the literature; they do not present a discussion. This is necessary to give the manuscript more scientific soundness.\nOne way to proceed would be to request that the authors perform cytotoxicity tests against host cells and parasites using the MTT or Alamar Blue assay (necessary to calculate the IC50). 
In addition, they should determine the phytochemical profile of the assayed extracts.\nMinor mistake (English language usage, Methods section): the word “extraction” is not used correctly; I would advise that the authors use the word ‘isolation’ instead, as it is more commonly used.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
},
{
"id": "39733",
"date": "04 Dec 2018",
"name": "Radheshyam Maurya",
"expertise": [
"Reviewer Expertise Mechanism of infection & Immunity of Visceral leishmaniasis"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nTables 1 and 2 in the results of this research article are incomplete; there is no proper labelling or description for the tables. What do the – and + signs in the tables indicate? In the second table, what do A, B, C, D, E and F represent? Table 3 shows the MIC assay, which includes the test extracts and a positive control, but what about a negative control consisting of Schneider’s Drosophila Medium only? How were the concentrations fixed for the extracts and the positive control? The deviation in concentration between the positive control and the extracts is large (2000 µg to 250 µg for the extracts, versus 150 µg to 15.625 µg for pentamidine and amphotericin B), and the extract concentrations used here are too high. Table 3 also shows a significant difference in MIC (mg/ml) for the same plant extract, S. nigrum, collected from different geographical areas (Kisii and Bungoma); if that is so, why was this not examined for the other plant species, P. capensis, which was collected from only one region? Table 4 indicates that blending ratios of the plant extracts inhibit parasite survival, but there is no clear evidence that the inhibitory effect on promastigotes is really due to a synergistic effect of the two plants rather than to different quantities of the active compound being present in the different extract ratios.\n\nIn this article, the results are missing the phytochemical profile (gas chromatographic profile and MALDI-TOF analysis) needed to identify the active compound responsible for the inhibitory effect on the parasite. For the in vitro experiment, there is no clear description of the drug carrier (vehicle) or of a vehicle control. Tables 6 and 7 show the NO assay, but there is no significant increase relative to the control groups (both positive and negative). The manuscript lacks the immunomodulatory action of the plant extracts; in the in vitro assay, the IFN-γ and IL-10 cytokines should also have been measured upon extract treatment to assess the shift in the Th1/Th2 dichotomy.\n\nThe manuscript lacks background knowledge about S. nigrum and P. capensis, such as their medicinal properties and general uses, and does not specify why these two plants were selected. The results also lack a cytotoxicity assay and the SI index of the plant extracts.\n5th December: A sentence has been edited in point 6 to make it clearer, and in point 8 a sentence has been added regarding the in vitro assay; both were accidentally omitted from the original report.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1556
|
https://f1000research.com/articles/7-1553/v1
|
26 Sep 18
|
{
"type": "Research Note",
"title": "Association between intermittent administration of parathyroid hormone 1-34 and ectopic calcification in rats",
"authors": [
"Israa Ahmed Radwan",
"Nahed Sedky Korany",
"Bassant Adel Ezzat"
],
"abstract": "The present study was conducted to determine the association between parathyroid hormone 1-34 administration and ectopic calcification in rats with glucocorticoid-induced osteoporosis. A total of 18 rats were used in the current study. Osteoporosis was induced in all rats via dexamethasone administration; the rats were then randomly distributed into Control and Forteo groups and were sacrificed 4 weeks after initiation of drug administration. Hemi-mandibles were decalcified, followed by routine histological analysis. Among the Forteo group, three rats displayed ectopic calcification: a true pulp stone, an intra-pulpal calcified structure with entrapped cells, and an intra-periodontal bone-like calcified structure with entrapped cells were observed, while no ectopic calcification was noticed in the control group.",
"keywords": [
"ectopic calcification",
"pulp stone",
"denticles",
"recombinant parathyroid hormone 1-34."
],
"content": "Introduction\n\nEctopic calcification is the pathologic deposition of minerals within soft tissues such as the dental pulp or periodontal ligaments (PDL)1,2. Pulpal ectopic calcification may manifest as generalized, linear calcification, or as circumscribed calcification (also known as pulp stones or denticles). Pulp stones can be seen free within the pulp tissues, partially associated with the dentin wall, or completely embedded in dentin. They may manifest as false concentric calcifications or true pulp stones3. The etiology of pulp calcification may be idiopathic4, although it may also be associated with pulp injury or degeneration5, orthodontic or physical forces6–8, or chemical stimuli9,10. Its incidence tends to increase with age3,11.\n\nParathyroid hormone (PTH) is a naturally occurring hormone important for calcium homeostasis12,13. Its level in the blood dictates its effect on the skeletal system, with a bone catabolic effect upon the chronic increase in PTH level associated with hyperparathyroidism13,14 and a bone anabolic effect upon administration of small intermittent dosages15,16. PTH secreted by the parathyroid gland (native PTH) is a polypeptide chain composed of 84 amino acids (PTH 1-84), while PTH 1–34 is a fragment of the PTH molecule “synthetised through recombinant DNA technology using a strain of Escherichia coli bacteria”17,18. Intermittent PTH 1–34 administration, owing to its bone anabolic effect, is successfully used for the management of osteoporosis19–22. Its bone anabolic effect has been linked to elevated osteoblast differentiation, number and activity23–29.\n\n\nMethods\n\nThis research was conducted as part of a study examining the effect of PTH 1–34 on the microarchitecture of the alveolar process of osteoporotic rats.\n\nIn the current research, 18 male Wistar rats (Rattus norvegicus) weighing 175–200 g and aged 3 to 4 months were used.
The animals were acquired from and maintained in the Animal House, Faculty of Medicine, Cairo University under the care of a specialized veterinarian. Each animal was kept in a separate cage. They were maintained under a controlled temperature of 25±2°C with a 12 h light/dark cycle and had ad libitum access to standard rat chow and water. This study was approved by the Research Ethics Committee, Faculty of Dentistry, Cairo University (approval number 151028).\n\nOsteoporosis was induced in all experimental animals (n=18) by five weekly doses of 7 mg/kg body weight dexamethasone sodium phosphate (Decadron® 4 mg/ml, Eipico Egypt), administered intramuscularly30,31. The animals were then randomly distributed using a random sequence generator program (randomizer.org) into two groups of 9 animals each; matching of the animals with the group numbers was done blindly by the primary investigator. Animals received either a daily subcutaneous injection of 60 μg/kg body weight PTH 1–34 (Forteo®; Eli Lilly Pharmaceuticals) (n=9)32 or an equal volume of saline (control group) (n=9). Drugs were administered in the early morning hours (8–9 am). The body weight of the animals was measured weekly, and drug dosages were adjusted accordingly. Animals were euthanized with an intra-cardiac overdose of sodium thiopental (80 mg/kg) 4 weeks after initiation of Forteo administration. Mandibles were dissected and separated into two halves; only one hemi-mandible from each rat was utilized for histological examination. The experimental unit was the rat hemi-mandible. The primary investigator was blinded.\n\nHemi-mandibles (n=18) were fixed in 10% calcium formol solution for 48 hours. The specimens were then washed and soaked in 10% EDTA for 4–5 weeks for decalcification. After decalcification was completed, the specimens were dehydrated in ascending grades of alcohol, cleared in xylol, and then embedded in paraffin blocks.
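The allocation and weight-based dosing described above can be sketched as follows. This is an illustrative reconstruction, not the authors’ actual procedure: the animal IDs, the fixed seed, and the function names are assumptions for the example.

```python
import random

# Illustrative sketch of the allocation described in the Methods:
# 18 animals split at random into a Forteo group and a saline control
# group of 9 each. The IDs and seed are assumptions, not study data.
def allocate(animal_ids, seed=2018):
    rng = random.Random(seed)
    shuffled = list(animal_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return sorted(shuffled[:half]), sorted(shuffled[half:])

# Weekly weight-adjusted PTH 1-34 dose (60 micrograms per kg body weight).
def weekly_dose_ug(body_weight_kg, dose_ug_per_kg=60):
    return dose_ug_per_kg * body_weight_kg

forteo, control = allocate(range(1, 19))
```

Seeding the generator makes the allocation reproducible, mirroring the use of a recorded random sequence from a generator program.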
Next, 6-µm-thick paraffin sections were cut, mounted on clean glass slides, and stained with haematoxylin and eosin31. The specimens were examined using a Leica DM300 light microscope (Leica Microsystems, Inc., Switzerland). Histological examination was performed by the blinded primary investigator. The dental pulp and surrounding periodontal ligaments of all teeth within the hemi-mandibles of both experimental groups (n=18) were examined for the presence of ectopic calcification.\n\n\nResults\n\nUpon histological examination of the Forteo group specimens, six rats showed normal pulp and periodontal ligaments with no ectopic calcifications, while ectopic calcifications were detected in three specimens (Dataset 1)33. A true pulp stone with pre-dentin and dentin surrounding a central cavity lined by cells was detected in one specimen (Figure 1a). Another specimen showed an intra-pulpal calcified structure with entrapped cells (Figure 1b). Meanwhile, one specimen displayed an intra-periodontal bone-like calcified structure with entrapped cells (Figure 1c). On the other hand, no ectopic calcification was observed in the control group specimens (Dataset 1)33, which showed normal pulp and periodontal ligaments (n=9) (Figure 1d).\n\n(a) A true pulp stone with dentin, pre-dentin and a central cavity lined by cells (original magnification, x100 (left) and x400 (right)). (b) Intra-pulpal calcification with entrapped cells (original magnification, x100 (left) and x400 (right)). (c) Intra-periodontal ectopic calcification with entrapped cells surrounded by disorganized periodontal ligaments (original magnification, x400).
(d) Light microscope image of the control group showing normal pulp and periodontal ligaments with no ectopic calcification (original magnification x 100).\n\n\nDiscussion\n\nDespite the fact that PTH 1–34 can successfully lower blood calcium level, and prevent vascular calcification34, in the current work, PTH 1–34 was associated with ectopic calcifications within the pulp and PDL, while none was observed in the control group specimens.\n\nGuimaraes et al. observed increased dentin deposition rate and elevated level of serum alkaline phosphatase in PTH 1–34 treated rats35. In a subsequent research, Guimaraes et al. elucidated that PTH 1–34 can regulate odontoblast like cells via protein kinase A- and protein kinase C-dependent pathways, with increases in odontoblast-like cells proliferation upon short PTH exposure and increases in their apoptosis upon longer exposure36.\n\nWang et al. demonstrated the ability of PTH to induce human PDL stem cells to differentiate into osteoblasts, which was associated with increased alkaline phosphatase activity and increased mineralization capacity37. Moreover, Li et al. described the ability of PTH 1–34 to induce the formation of calcified nodule in cementoblast cell line, which was attributed to the ability of the drug to increase cementoblast activity, alkaline phosphatase level and subsequently calcification38.\n\nThe stimulatory effect of PTH 1–34 on odontoblast, cementoblast and osteoblasts function can help explain the findings of the current research.\n\nFurther research studying the effect of different dosage schemes of PTH 1–34, administered for different time periods, on dental pulp and odontoblast cells is recommended.\n\n\nData availability\n\nDataset 1. Images captured from each mouse in each group not shown in Figure 1.DOI: https://doi.org/10.5256/f1000research.16298.d21852333.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nMaeda H, Nakano T, Tomokiyo A, et al.: Mineral trioxide aggregate induces bone morphogenetic protein-2 expression and calcification in human periodontal ligament cells. J Endod. Elsevier; 2010; 36(4): 647–52. PubMed Abstract | Publisher Full Text\n\nGiachelli CM: Ectopic calcification: gathering hard facts about soft tissue mineralization. Am J Pathol. American Society for Investigative Pathology; 1999; 154(3): 671–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNanci A: Ten Cate’s Oral Histology: development, structure and function. 8th edition. St Louis: El Sevier Mosby; 2013; 201. Reference Source\n\nSiskos GJ, Georgopoulou M: Unusual case of general pulp calcification (pulp stones) in a young Greek girl. Endod Dent Traumatol. Denmark; 1990; 6(6): 282–4. PubMed Abstract | Publisher Full Text\n\nKannan S, Kannepady SK, Muthu K, et al.: Radiographic assessment of the prevalence of pulp stones in the south Indian population. J Endod. Elsevier Ltd; 2015; 6(5): 209–13.\n\nErtas ET, Veli I, Akin M, et al.: Dental pulp stone formation during orthodontic treatment: A retrospective clinical follow-up study. Niger J Clin Pract. 2017; 20(1): 37–42. PubMed Abstract | Publisher Full Text\n\nSübay RK, Kaya H, Tarim B, et al.: Response of human pulpal tissue to orthodontic extrusive applications. J Endod. United States; 2001; 27(8): 508–11. PubMed Abstract | Publisher Full Text\n\nRobertson A, Lundgren T, Andreasen JO, et al.: Pulp calcifications in traumatized primary incisors. A morphological and inductive analysis study. Eur J Oral Sci. England; 1997; 105(3): 196–206. PubMed Abstract | Publisher Full Text\n\nHoltgrave EA, Hopfenmüller W, Ammar S: Tablet fluoridation influences the calcification of primary tooth pulp. J Orofac Orthop. Germany; 2001; 62(1): 22–35. 
PubMed Abstract | Publisher Full Text\n\nHoltgrave EA, Hopfenmüller W, Ammar S: Abnormal pulp calcification in primary molars after fluoride supplementation. ASDC J Dent Child. United States; 2002; 69(2): 201–206, 126. PubMed Abstract\n\nHillmann G, Geurtsen W: Light-microscopical investigation of the distribution of extracellular matrix molecules and calcifications in human dental pulps of various ages. Cell Tissue Res. Germany; 1997; 289(1): 145–54. PubMed Abstract | Publisher Full Text\n\nHodsman AB, Bauer DC, Dempster DW, et al.: Parathyroid hormone and teriparatide for the treatment of osteoporosis: a review of the evidence and suggested guidelines for its use. Endocr Rev. United States; 2005; 26(5): 688–703. PubMed Abstract | Publisher Full Text\n\nMarieb EN, Hoehn K: The Endocrine System. In: Beauparlant S editor. Human Anatomy & Physiology Eighth Edition. eighth. Benjamin Cummings; 2007; 616–8.\n\nDempster DW, Parisien M, Silverberg SJ, et al.: On the mechanism of cancellous bone preservation in postmenopausal women with mild primary hyperparathyroidism. J Clin Endocrinol Metab. 1999; 84(5): 1562–6. PubMed Abstract | Publisher Full Text\n\nDobnig H, Turner RT: The effects of programmed administration of human parathyroid hormone fragment (1-34) on bone histomorphometry and serum chemistry in rats. Endocrinology. 1997; 138(11): 4607–12. PubMed Abstract | Publisher Full Text\n\nDempster DW, Cosman F, Kurland ES, et al.: Effects of daily treatment with parathyroid hormone on bone microarchitecture and turnover in patients with osteoporosis: a paired biopsy study. J Bone Miner Res. 2001; 16(10): 1846–53. PubMed Abstract | Publisher Full Text\n\nBringhurst FR, Demay MB, Kronenberg Hm: Hormones and disorders of mineral metabolism. In: Kronenberg H, Shlomo M, Polonsky K, Larsen. P, editors. Williams textbook of endocrinology. 11 th edit. WB Saunders, Philadelphia, Pa USA,; 2008; 1203–68.\n\nLilly E: drug brochure provided by Eli Lilly. 2002. 
Reference Source\n\nNeer RM, Arnaud CD, Zanchetta JR, et al.: Effect of parathyroid hormone (1-34) on fractures and bone mineral density in postmenopausal women with osteoporosis. N Engl J Med. 2001; 344(19): 1434–41. PubMed Abstract | Publisher Full Text\n\nOshima M, Inoue K, Nakajima K, et al.: Functional tooth restoration by next-generation bio-hybrid implant as a bio-hybrid artificial organ replacement therapy. Sci Rep. Nature Publishing Group; 2014; 4: 6044. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSone T, Ito M, Fukunaga M, et al.: The effects of once-weekly teriparatide on hip geometry assessed by hip structural analysis in postmenopausal osteoporotic women with high fracture risk. Bone. Elsevier; 2014; 64: 75–81. PubMed Abstract | Publisher Full Text\n\nGlüer CC, Marin F, Ringe JD, et al.: Comparative effects of teriparatide and risedronate in glucocorticoid-induced osteoporosis in men: 18-month results of the EuroGIOPs trial. J Bone Miner Res. 2013; 28(6): 1355–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJilka RL: Molecular and cellular mechanisms of the anabolic effect of intermittent PTH. Bone. United States; 2007; 40(6): 1434–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDobnig H, Turner RT: Evidence that intermittent treatment with parathyroid hormone increases bone formation in adult rats by activation of bone lining cells. Endocrinology. 1995; 136(8): 3632–8. PubMed Abstract | Publisher Full Text\n\nJilka RL, Weinstein RS, Bellido T, et al.: Increased bone formation by prevention of osteoblast apoptosis with parathyroid hormone. J Clin Invest. 1999; 104(4): 439–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBellido T, Ali AA, Plotkin LI, et al.: Proteasomal degradation of Runx2 shortens parathyroid hormone-induced anti-apoptotic signaling in osteoblasts. A putative explanation for why intermittent administration is needed for bone anabolism. J Biol Chem. 
American Society for Biochemistry and Molecular Biology; 2003; 278(50): 50259–72. PubMed Abstract | Publisher Full Text\n\nIshizuya T, Yokose S, Hori M, et al.: Parathyroid hormone exerts disparate effects on osteoblast differentiation depending on exposure time in rat osteoblastic cells. J Clin Invest. 1997; 99(12): 2961–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen HL, Demiralp B, Schneider A, et al.: Parathyroid hormone and parathyroid hormone-related protein exert both pro- and anti-apoptotic effects in mesenchymal cells. J Biol Chem. United States; 2002; 277(22): 19374–81. PubMed Abstract | Publisher Full Text\n\nPettway GJ, Meganck JA, Koh AJ, et al.: Parathyroid hormone mediates bone growth through the regulation of osteoblast proliferation and differentiation. Bone. 2008; 42(4): 806–18. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLucinda LM, Vieira BJ, Oliveira TT, et al.: Evidences of osteoporosis improvement in Wistar rats treated with Ginkgo biloba extract: a histomorphometric study of mandible and femur. Fitoterapia. 2010; 81(8): 982–7. PubMed Abstract | Publisher Full Text\n\nThakur RS, Pawara RS, Ahirwar B: Evaluation of Saraca indica for the management of dexamethasone-induced osteoporosis. J Acute Med. Elsevier Taiwan LLC; 2016; 6(1): 7–10. Publisher Full Text\n\nSkripitz R, Johansson HR, Ulrich SD, et al.: Effect of alendronate and intermittent parathyroid hormone on implant fixation in ovariectomized rats. J Orthop Sci. 2009; 14(2): 138–43. PubMed Abstract | Publisher Full Text\n\nAhmed Radwan I, Sedky Korany N, Adel Ezzat B: Dataset 1 in: Association between intermittent administration of parathyroid hormone 1-34 and ectopic calcification in rats. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16298.d218523\n\nWu M, Rementer C, Giachelli CM: Vascular calcification: an update on mechanisms and challenges in treatment. Calcif Tissue Int. 2014; 93(4): 365–73. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuimarães GN, Cardoso GB, Navesc LZ, et al.: Short-term PTH administration increases dentine apposition and microhardness in mice. Arch Oral Biol. 2012; 57(10): 1313–9. PubMed Abstract | Publisher Full Text\n\nGuimarães GN, Rodrigues TL, de Souza AP, et al.: Parathyroid hormone (1-34) modulates odontoblast proliferation and apoptosis via PKA and PKC-dependent pathways. Calcif Tissue Int. 2014; 95(3): 275–81. PubMed Abstract | Publisher Full Text\n\nWang X, Wang Y, Dai X, et al.: Effects of Intermittent Administration of Parathyroid Hormone (1-34) on Bone Differentiation in Stromal Precursor Antigen-1 Positive Human Periodontal Ligament Stem Cells. Stem Cells Int. 2016; 2016: 4027542. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi Y, Hu Z, Zhou C, et al.: Intermittent parathyroid hormone (PTH) promotes cementogenesis and alleviates the catabolic effects of mechanical strain in cementoblasts. BMC Cell Biol. BMC Cell Biology; 2017; 18(1): 19. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "38725",
"date": "01 Oct 2018",
"name": "Mohamed Shamel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe current study performed by the authors is an interesting one, which aims to determine the association between parathyroid hormone 1-34 administration and the formation of ectopic calcification in the pulp and periodontal ligaments of osteoporotic rats.\nResults revealed that rats receiving PTH 1–34 showed ectopic calcifications in their pulp and periodontal ligaments.\nThe study is well organized and the authors documented their work well, in particular the images captured from the rats of each group.\nHowever, I have some minor remarks, as follows:\nAim: I suggest that the aim should state that the investigations were carried out on dental structures (pulp and periodontal ligaments).\nResults: I suggest that the histology of the calcified areas be thoroughly examined to determine whether it resembles dentin, bone or cementum in each area.\nDiscussion: More information is needed on the mechanism by which PTH 1–34 stimulates odontoblast, cementoblast and osteoblast functions.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "50321",
"date": "15 Jul 2019",
"name": "Suzan A. Kamel-ElSayed",
"expertise": [
"Reviewer Expertise Bone Biology"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThe manuscript describes the histological findings of the dental pulp and periodontal ligaments after intermittent administration of PTH (1-34) to male rats with induced osteoporosis. I think the manuscript requires additional data to be acceptable. The following are my comments:\nThe title should include the word microarchitecture or histology to reflect the manuscript’s content.\n\nData for ALP level, bone density measurement or ash value are required to confirm that osteoporosis was induced in all male rats.\n\nThe results should include images from both the Forteo-treated group and the saline-treated group. Although it is stated that the dataset included images captured from each mouse in each group, I did not find any image from the control group. The names of the images included the word \"Forteo\" and thus I assumed that all images were captured from the treated group only. In addition, figure 1 and the supplemented images should include arrows that indicate the different parts, e.g. dentin, pulp, periodontal ligament, and the different calcifications (intrapulpal, intraperiodontal, etc.).\n\nThe discussion should include a possible explanation of why only 3 out of 9 rats developed ectopic calcification following intermittent PTH (1-34) injection, and of how the authors excluded the possibility of a prior existence of ectopic calcification.\n\nDid the treatment improve the microarchitecture of the mandible of all osteoporotic treated rats? 
(please see the additional citation1)\n\nResults of the submitted manuscript should not be included as a reference (# 33).\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1553
|
https://f1000research.com/articles/7-1547/v1
|
25 Sep 18
|
{
"type": "Research Article",
"title": "Design and implementation of semester long project and problem based bioinformatics course",
"authors": [
"Geetha Saarunya",
"Bert Ely"
],
"abstract": "Background: Advancements in ‘high-throughput technologies’ have inundated us with data across disciplines. As a result, there is a bottleneck in addressing the demand for analyzing data and the training of ‘next generation data scientists’. Methods: In response to this need, the authors designed a single-semester “Bioinformatics” course that introduced a small cohort of students at the University of South Carolina to methods for analyzing data generated through different ‘omic’ platforms using a variety of model systems. The course was divided into seven modules, with each module ending with a problem. Results: Towards the end of the course, the students each designed a project that allowed them to pursue their individual interests. These completed projects were presented as talks and posters at the ISCB-RSG-SEUSA symposium held at the University of South Carolina. Conclusions: An important outcome of this course design was that the students acquired the basic skills to critically evaluate the reporting and interpretation of data for a problem or project presented during the symposium.",
"keywords": [
"bioinformatics education",
"problem-based learning",
"project-based learning",
"hands-on course"
],
"content": "Introduction\n\nBioinformatics is a rapidly growing interdisciplinary field because of advances in both computer science and the life sciences. Rapid advances in sequencing technologies have led to a deluge of biological data, creating a need for expeditious, efficient, and effective analyses. Practitioners of bioinformatics now add techniques from statistics, information science and engineering to develop algorithms and build predictive models to understand the dynamics within a biological system. This paradigm shift in how bioinformatics is perceived has resulted in an evolutionary model of growth across both of its root disciplines1. Bioinformatics as a field also enjoys a degree of duality: “episteme” (scientific knowledge) and “techne” (technical know-how), leading to the idea of ‘Science informing the tools and the tools enabling science’1. In a 2017 survey of 704 NSF principal investigators, more than 90% of respondents reported that they would soon be working with data sets that required high-performance computing, and they also identified bioinformatics data analyses as the most urgent unmet need for successful completion of their projects2. Increased exposure of students at the undergraduate level will help address the need for specialists working in this field and also make the students attractive candidates for opportunities in industry or in graduate school3–5. The Global Organization for Bioinformatics Learning, Education and Training (GOBLET) identified through surveys that the skills required for ‘basic data stewardship’ are taught in only ~25% of education programs, creating a gulf between theory and practice6–8.\n\nMany courses have been designed and implemented to address the gaps faced in the field. They are project based, problem based or a combination of both to study one or more ‘next-generation’ datasets9–12.
The courses have been designed as workshops9 or as semester long courses using analyses from a single next-generation technology10. The authors haven’t come across a course that incorporates multi-omics data analyses in a single semester. There have been studies that address a single problem using multi-omics approaches11 and there have been pipeline designs that help integrate these data under a single platform12.\n\nIn response to this need, we designed a single semester course on bioinformatics in the Department of Biological Sciences at University of South Carolina that was targeted towards undergraduate seniors and graduate students who were mainly bench scientists working on experiments which generated data across different ‘omic technologies’ using different living systems.\n\nThe curriculum task force of the ‘International Society of Computational Biology’, a scholarly society for both bioinformatics and computational biology research scientists across the world, identified a set of 16 core competencies established through surveys and an iterative process of inputs from people associated with the fields of bioinformatics and computational biology13.\n\nHowever, one of the biggest challenges is the heterogeneity of the backgrounds of the course participants. There is ‘no one size fits all’ while designing a bioinformatics course. In fact, there are three different types of user groups that employ bioinformatics in their research (Table 1), and each of these user groups requires different competencies14,15.\n\nThus, there was considerable diversity in the backgrounds of the students registered for our course. In response, we chose to follow a ‘learner adaptable’ style of design of the curriculum. This approach allowed us to design the course based on the students’ knowledge of the subject and their expectations of the course.\n\n\nMethods\n\nCourse conception. 
This course was designed to provide a structured Bioinformatics course that is geared towards the needs of students working on different “omics” experiments. The general premise of the course was to critically examine and analyze published or in-preparation datasets across different biological systems in a hands-on fashion. In addition, we wanted to introduce the students to the R programming language.\n\nCourse Participants. We had nine participants registered for the course. Four of the students were undergraduate seniors, four were first or second year graduate students and one of them was an emergency medical technician (EMT) with a Bachelor of Science degree who was taking additional classes for credit and is now in medical school.\n\nLearning objectives and outcomes of the course. We sent a three-question survey (Table 2) to all the participants to understand their reasoning for registering in the course.\n\n*Since we did not have this information in the pre-class survey answers, we asked students about their experience with programming languages in class. We got 7 responses in total to the pre-lab survey.\n\nThe primary learning objective of the course was to introduce the students to the breadth and depth of the field of Bioinformatics for ‘omics’ data analyses. We also identified the following three course outcomes for the students.\n\nI. At the end of the course, students should be able to identify and implement alternate strategies to answer genomics-based research questions.\n\nII. Students should be comfortable with the use of open-source genomic software and command line programming, and be able to use R statistical packages.\n\nIII. 
Students should be able to design and troubleshoot analyses of nucleotide sequence data and elicit biological information from the data.\n\nThe course was divided into seven modules spread across the semester: Genome assembly and annotation, Comparative genomics, Introduction to Statistics, Metagenomics, Transcriptomics, Proteomics and Cancer data analysis. Each module ended with a graded research problem either in a prokaryotic system or a eukaryotic system (Table 3 and Supplementary File 1).\n\n*All the presentations associated with each module, course assignments and problem assignments are available for access in the supplementary section of the paper. The final projects that were presented as posters and talks are not available for access at this time.\n\n\nResults\n\nBased on the responses of the students, we assigned potential user groups as explained in Table 1 at the start of the class, with their expected competency levels at the end of the class. Seven students replied and two students did not reply to the pre-course survey. We were able to obtain permission from six of the seven students who replied to the survey to have their answers published online anonymously. Any identifying information in terms of names or project details has been edited from the responses (Table 4).\n\nSuccessful completion of the project assigned to every student by the end of a course module determined their competency in the course. In lieu of a final exam, each student designed a research project, conducted appropriate analyses, and summarized their results in the form of a poster or a talk at the end of the semester as part of the ISCB-RSG-SE USA (International Society for Computational Biology - Regional Student Group - Southeast USA) conference held on campus on December 8–9, 2017. They also had the opportunity to listen to talks from professors working on bioinformatics projects and to interact with their peers from the University of South Florida and the University of Alabama. 
In addition, two graduate students wrote papers on their projects with input from their respective research advisors.\n\n\nDiscussion\n\nThis course covered many topics in 13 weeks, and some degree of mastery was required for each topic. In addition, half of the students had no familiarity with programming. As a result, many of the students were stretched beyond their comfort zone. However, since this was a small class, we were able to work with the students individually to help them be successful, and also to tailor projects to the students’ backgrounds and expectations. An important outcome of this course design was that the students acquired the basic skills to critically evaluate the reporting and interpretation of data for a problem or project during the symposium.\n\nOur leading goal was to develop a course that was responsive to the needs and background abilities of the participating students. It is important to recognize that every course will have students at different levels of learning with different goals. Hence, when designing a course that caters to the needs of the students, it may be a good idea to have a small class.\n\nIn our class, every student had a different learning curve. We determined the competency of a student per module by their successful completion of the problem set and/or the project. The first objective of the course was to expose the students to not just one living system but many, including bacterial, human, and Drosophila systems. The other objective was to introduce the students to the R computational platform20. Our initial challenge was to address the problems faced by the students in using the platform for the first time. We wanted the students to understand the intricacies of using R as a programming language, but if we repeat this class, we will provide the code to the students as R Markdown documents. 
We would also have additional R assignments at the beginning of the course and out-of-class help sessions to help students get comfortable using R.\n\nA major challenge was to identify ways to map the competencies required to the expectations of the course at both the undergraduate and graduate levels. Since we had a small number of students, we designed and delivered a structured curriculum that integrated both the continuously changing and stable technological platforms, using model systems that were used by at least one student for every module.\n\nAs the main goal of the course was to address the needs of the students, we designed the current model of ‘multi-project’ modules of biological data analyses. Due to the small class size, we were able to give personalized attention to every student. In the future, a major change that we would incorporate would be to separate the projects and problems assigned to graduate and undergraduate students. Generally, the undergraduate students do not have their own data, while the graduate students usually have, or are in the process of obtaining, data that they want to analyze. Therefore, we would either have separate sections for the graduate and undergraduate students, or we would have a combined lecture but separate recitation sections where the students would apply what they have learned in the lecture portion of the class. 
The graduate students would be encouraged to develop projects that are relevant to their research, while the undergraduates would work in groups on projects designed by the instructor.\n\n• This course was designed to address the students’ need to analyze ‘omic’ data sets at the University of South Carolina.\n\n• It was divided into seven modules with practical tasks at the end of each module.\n\n• Students designed their projects and presented them as papers, posters, and talks at the ISCB-RSG-SEUSA symposium.\n\n\nData availability\n\nDataset 1: Pre-class surveys: 10.5256/f1000research.16310.d218863 (reference 25)\n\nDataset 2: Post-class surveys: 10.5256/f1000research.16310.d218864 (reference 26)\n\n\nEthical considerations\n\nThe authors have posted the pre-class survey answers of students who have consented to have their responses published anonymously. All identifying information has been edited from the responses. The post-class survey responses are given as feedback to the instructors, also anonymously, through an online survey carried out by the university.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors would like to thank Dr. Phillip Buckhaults for the design, conception and delivery of the lectures on “Cancer Genomics”. The authors would also like to thank all the attendees, participants and professors of the Departments of Biological Sciences and Computer Science of University of South Carolina for participating in the first ‘ISCB-RSG-SEUSA’ symposium held this past December of 2017 at Columbia, SC.\n\n\nSupplementary material\n\nSupplementary File 1: Course syllabus and teaching materials\n\n\nReferences\n\nSearls DB: The roots of bioinformatics. PLoS Comput Biol. 2010; 6(6): e1000809.\n\nBarone L, Williams J, Micklos D: Unmet needs for analyzing biological big data: A survey of 704 NSF principal Investigators. bioRxiv. 2017; 108555.\n\nMadlung A: Assessing an effective undergraduate module teaching applied bioinformatics to biology students. PLoS Comput Biol. 2018; 14(1): e1005872.\n\nDinsdale E, Elgin SC, Grandgenett N, et al.: NIBLSE: A Network for Integrating Bioinformatics into Life Sciences Education. CBE Life Sci Educ. 2015; 14(4): Ie3.\n\nVia A, Blicher T, Bongcam-Rudloff E, et al.: Best practices in bioinformatics training for life scientists. Brief Bioinform. 2013; 14(5): 528–37.\n\nCresiski RH: Undergraduate bioinformatics workshops provide perceived skills. J Microbiol Biol Educ. 2014; 15(2): 292–4.\n\nBanta LM, Crespi EJ, Nehm RH, et al.: Integrating genomics research throughout the undergraduate curriculum: a collection of inquiry-based genomics lab modules. CBE Life Sci Educ. 2012; 11(3): 203–8.\n\nAttwood TK, Blackford S, Brazas MD, et al.: A global perspective on evolving bioinformatics and data science training needs. Brief Bioinform. 2017; bbx100.\n\nEmery LR, Morgan SL: The application of project-based learning in bioinformatics training. PLoS Comput Biol. 2017; 13(8): e1005620.\n\nLuo J: Teaching the ABCs of bioinformatics: a brief introduction to the Applied Bioinformatics Course. Brief Bioinform. 2014; 15(6): 1004–13.\n\nAltmäe S, Esteban FJ, Stavreus-Evers A, et al.: Guidelines for the design, analysis and interpretation of 'omics' data: focus on human endometrium. Hum Reprod Update. 2014; 20(1): 12–28.\n\nBoekel J, Chilton JM, Cooke IR, et al.: Multi-omic data analysis using Galaxy. Nat Biotechnol. 2015; 33(2): 137–9.\n\nMulder N, Schwartz R, Brazas MD, et al.: The development and application of bioinformatics core competencies to improve bioinformatics training and education. PLoS Comput Biol. 2018; 14(2): e1005772.\n\nWelch L, Lewitter F, Schwartz R, et al.: Bioinformatics curriculum guidelines: toward a definition of core competencies. PLoS Comput Biol. 2014; 10(3): e1003496.\n\nCarver T, Harris SR, Berriman M, et al.: Artemis: an integrated platform for visualization and analysis of high-throughput sequence-based experimental data. Bioinformatics. 2012; 28(4): 464–9.\n\nDarling AE, Tritt A, Eisen JA, et al.: Mauve assembly metrics. Bioinformatics. 2011; 27(19): 2756–7.\n\nMeyer F, Paarmann D, D'Souza M, et al.: The metagenomics RAST server - a public resource for the automatic phylogenetic and functional analysis of metagenomes. BMC Bioinformatics. 2008; 9(1): 386.\n\nParks DH, Beiko RG: Identifying biologically relevant differences between metagenomic communities. Bioinformatics. 2010; 26(6): 715–721.\n\nBrooks AN, Yang L, Duff MO, et al.: Conservation of an RNA regulatory map between Drosophila and mammals. Genome Res. 2011; 21(2): 193–202.\n\nR Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing. Vienna, Austria. 2014.\n\nGoldman M, Craft B, Swatloski T, et al.: The UCSC Cancer Genomics Browser: update 2015. Nucleic Acids Res. 2015; 43(Database issue): D812–817.\n\nCancer Genome Atlas Research Network, Weinstein JN, Collisson EA, et al.: The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013; 45(10): 1113–20.\n\nSubramanian A, Tamayo P, Mootha VK, et al.: Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005; 102(43): 15545–50.\n\nBanister CE, Liu C, Pirisi L, et al.: Identification and characterization of HPV-independent cervical cancers. Oncotarget. 2017; 8(8): 13375–86.\n\nSaarunya G, Ely B: Dataset 1 in: Design and implementation of semester long project and problem based bioinformatics course. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16310.d218863\n\nSaarunya G, Ely B: Dataset 2 in: Design and implementation of semester long project and problem based bioinformatics course. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16310.d218864"
}
|
[
{
"id": "39760",
"date": "02 Nov 2018",
"name": "Russell Schwartz",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSaarunya and Ely describe a problem-based bioinformatics course designed to meet a need for “next generation data scientists” in the life sciences, a need identified by many current efforts in life sciences education. Case studies of course development efforts like this can be valuable to those seeking to develop similar courses or incorporate those courses into their curricula and looking for ideas or for pitfalls to avoid. The authors do a good service for the field in putting out their efforts and lessons learned in a form from which other educators can benefit. The specific effort here is a nice example of a small project-focused course serving a cohort with some diversity of backgrounds and immediate training needs. While it presents just one small example, that description might reasonably apply to courses many training programs are developing or would like to develop. In addition to the article itself, the supplementary material includes a full syllabus, lecture slides, assignments, and some supplementary materials, increasing its value to others looking to develop course materials in this space.\n\nThe authors make a good case for the need for new courses along these lines. They back that need up well with appropriate citations to the relevant literature on life sciences and bioinformatics education. 
The manuscript provides a good background on prior efforts to characterize the need for bioinformatics training, identify the specific skills required by future life scientists, and how those skills are or are not being provided in practice. The authors further give reasonable consideration to challenges to the design of bioinformatics curricula that they expected to confront in this effort. On the latter point, they might also refer to Williams et al. (20171), which identified a number of other recurring challenges to bioinformatics education in the life sciences. Others in the field might appreciate the perspective of these authors on whether any of the challenges Williams et al. identified were encountered in their effort and, if so, how they were overcome.\n\nThe course itself covers a nice range of topics in applied bioinformatics, which might be expected to meet the needs of a diverse set of likely users. The course materials provided in the supplement might therefore find a good audience. One general concern, though, is that the supplementary materials contain some third-party resources, for which it might be more appropriate to include a reference or link rather than the material itself. The teaching approach is fairly applied, with a lot of focus on specific data resources and software, although with some attention to principles behind these resources. While some user communities might favor an approach more grounded in the principles and theory, the focus here seems typical of many bioinformatics courses aimed primarily at biology students. The authors might do a bit more to justify the balance of focus on practice versus theory, with reference to efforts at identifying specific bioinformatics competencies needed by their likely user community, several of which the paper cites.\n\nThe Results present some interesting material in the form of a pre-class survey and post-class course evaluation material. 
While the cohort here is a single small sample, some useful lessons can be drawn about the diversity of backgrounds and needs of even a small group like this. The paper would be considerably stronger with some more serious assessment of whether the learning objectives of the course were met. That is a non-trivial undertaking and cannot be done retroactively, but might be worth considering for a future iteration of the class if it is being continued. The materials do include results of a university-run course evaluation, which provide some indication of how students felt about the course, although that is different from showing how successfully they learned the material. This post-class evaluation makes for some interesting reading, although if it is being included with the paper, it might bear some comment in the Results and Discussion.\n\nIt would be useful also to see some comparison to other similar course material available in publicly accessible forms. While that is a difficult moving target, comparing to a few alternatives from prominent course repositories or MOOCs, particularly to highlight the unusual or especially innovative features of this course, would be valuable.\n\nThe paper does a nice job of presenting some lessons learned in the Discussion. It is commendable that the authors spend some time on what did not work so well in this class and consider how it might be done differently in the future. One would ideally like to see this taken further via a more comprehensive formative assessment process – with problems identified via a formal assessment, solutions proposed, and those solutions demonstrated to be effective in a re-assessment. 
It is understandable that that may be beyond the scope of a one-off paper like this, though, and it is nonetheless easy to see how others developing a class in this domain might benefit from the advice given here to avoid some of the same pitfalls.\n\nBeyond these more specific technical points, the document is clear and generally well-written. I noted just a couple of minor errors:\np. 4: ``International society of Computational biology’’ should be ``International Society for Computational Biology’’. p. 4: ``Regional student group – Southeast USA’’ should be ``Regional Student Group – Southeast USA’’.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "4256",
"date": "26 Nov 2018",
"name": "Geetha Saarunya",
"role": "Author Response",
"response": "The authors would like to thank Dr. Schwartz for his in-depth and insightful feedback on the paper. Following are the comments from the authors, which will be incorporated into the final version of the paper after the second and third referees' feedback:\n\n1. The authors recognize the contributions made by Williams et al.* in identifying the challenges of introducing bioinformatics to life-science students. These issues are already addressed in the paper in the following ways:\n\n(i) Faculty issues (training): The authors’ training and background gave them an opportunity to design a multi-project/problem based course. But the post-module projects/problem sets were based on the background of the students. And this was possible because of the small class size.\n\n(ii) Faculty issue (time): This course was designed with inputs from the students based on their needs and training. Hence a lot of time was spent on the course design followed by making changes/adjustments to the course during the implementation.\n\n(iii) Student issue (Background skills): The authors addressed the gaps in students' computational and statistical training by offering additional learning modules. The authors have also addressed the problems faced by the students and ways to tackle them in the future under the ‘Discussion’ section.\n\n(iv) Student issue (Interest): As an applied Bioinformatics course, the students had an opportunity to apply their learning to solve problems and projects in their area of interest/background. Active engagement and participation of the students was encouraged throughout the course by timely submission of projects and problem sets.\n\n2. The authors recognize the need to have a better competency assessment of the students pre- and post-course. In future, this can be accomplished in the form of pre-course problem solving and post-course problem solving to ensure that the students meet the set learning objectives. 
The course in the current format had the students research, design, address, and present their learning (with emphasis on critical evaluation and problem solving) in the form of a project presented as a talk/poster in the research symposium held at the end of the semester. To protect the students’ data/projects, the final posters and presentations are not included in this paper.\n\n3. As most of the participants were classified as 'Bioinformatics tool users', the authors chose to focus on applied bioinformatics as opposed to bioinformatics theory. In order to have a bioinformatics-focused theory class designed to address every 'omic' problem, the authors believe that it would be prudent to have just one or two modules together and introduce theory and problems/projects pertaining to the same.\n\n4. The authors have cited the third-party resources in the main paper with reference numbers in the supplementary materials. The authors will add the supplementary references in the supplementary section and the main references in the main paper.\n\n5. The course design and challenges addressed in this paper pertain to the small class size and may not accurately reflect the challenges faced at the level of MOOC learning. But the authors can add references to MOOC courses that offer a similar style of training in the background section.\n\n*Reference:* Williams J, Drew J, Galindo-Gonzalez S, Robic S, Dinsdale E, Morgan W, Triplett E, Burnette J, Donovan S, Elgin S, Fowlks E, Goodman A, Grandgenett N, Goller C, Hauser C, Jungck J, Newman J, Pearson W, Ryder E, Wilson Sayres M, Sierk M, Smith T, Tosado-Acevedo R, Tapprich W, Tobin T, Toro-Martínez A, Welch L, Wright R, Ebenbach D, McWilliams M, Rosenwald A, Pauley M: Barriers to Integration of Bioinformatics into Undergraduate Life Sciences Education. bioRxiv. 2017"
}
]
},
{
"id": "40513",
"date": "10 Dec 2018",
"name": "Mark A. Pauley",
"expertise": [
"Reviewer Expertise Bioinformatics education",
"bioinformatics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\n“Design and Implementation of Semester Long Project and Problem based Bioinformatics Course” describes a “multi-omics” bioinformatics course at the University of South Carolina intended for advanced undergraduates and graduate students. The course was implemented in Fall 2017; nine students took it. Per the authors, the primary learning objective of the class was to introduce students “to the breadth and depth of the field of Bioinformatics for ‘omics’ data analyses.” The course was divided into seven modules (e.g., “Genome Assembly and Annotation,” “Comparative Genomics”). Each module had an associated graded problem set, and students completed a research project at the end of the course. A three-question, pre-course survey was used to place students into user groups—bioinformatics tool users, bioinformatics data scientists, and bioinformatics engineers.\nThe article has many strengths. The authors make a compelling case for the need for courses like it to prepare students for graduate school and to address the need for specialists in the field, and they do a good job of putting their course in the context of other bioinformatics education efforts. The contents of the course are clearly laid out (Table 3), and the authors provide a large amount of material (syllabus, slide decks, problem sets) developed for the class as a supplementary file—both will be invaluable for others wishing to implement the entire course or parts of it. 
As how a course could be improved is often more instructive than what went well, their discussion of potential changes in subsequent iterations of the class is very helpful. Finally, the article is clearly written and easy to read.\nThat said, the manuscript has several issues that should be addressed. First, a number of references are potentially mis-cited. For example, References 6 and 7 cite a Global Organization for Bioinformatics Learning (GOBLET) study that showed that basic data stewardship skills are only taught in 25% of education programs. However, neither of these papers mention the GOBLET survey or the 25% statistic. In addition, References 11 and 12 do not deal with bioinformatics courses and Reference 15 does not discuss the competencies of different bioinformatics users as their use would imply. Similarly, I am concerned about the bioinformatics user groups given in Table 1. Specifically, the descriptions of the three groups are very similar to the three personas described in Reference 14, and the name of one (bioinformatics engineer) is the same (the names of the other two are almost the same). In short, it’s not clear if the authors are restating the results of Reference 14 or are proposing a slightly different grouping. Although the posted resources are clearly an important contribution, I found them to be incomplete in one important aspect. In particular, the authors state that every module had a problem set/project associated with it, but this was missing from three of the seven modules. Furthermore, a brief description of the final research projects the students worked on would be helpful as it would indicate what the students were able to do at the end of the semester.\nIn addition to the above, very little is provided in terms of results. One of the results seems to be the placement of students into the three user groups. 
However, how the results of the pre-course survey were used to place the students into these groups and if and how they impacted the way in which the course was taught is not clear. Similarly, Table 4 and the corresponding description of it in the narrative, particularly the use of the word “expected,” is confusing. Does Column 3 of the table refer to the group a given student was in at the end of the semester or where they were expected to be at some other point in the semester? In any event, how was this determined? Although the course evaluation is helpful in understanding how students felt the course went, I would have liked to have seen more assessment results, particularly if the learning objectives of the course had been met. In general, the paper would be strengthened by the results of another iteration of the course, one in which the proposed changes had been made and the learning gains of the students were assessed.\nAs previously mentioned, the article is well-written. However, I did notice two small errors. The first sentence of “Course design” should probably be “We had nine students register for the course.” Also, “bioinformatics” is incorrectly capitalized in “This course was designed to provide a structured Bioinformatics course. . .”.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "40462",
"date": "17 Dec 2018",
"name": "Allegra Via",
"expertise": [
"Reviewer Expertise Protein structural bioinformatics",
"protein structure and function prediction and analysis",
"and protein interactions. Programming and software development. Science of learning",
"educational psychology",
"cognitive sciences",
"and (bioinformatics) curriculum development."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe paper describes a semester long bioinformatics course targeting undergraduate seniors and graduate students who were bench scientists in need of learning how to analyse data generated across different ‘omic technologies’.\n\nI find it weird that “The authors haven’t come across a course that incorporates multi-omics data analyses in a single semester.” If not in a single course, some curricula offer multi-omics data tools and analyses spread across more than one course. A comparison of the presented course with such curricula would be of great interest, as would a discussion on the convenience of integrating such a large amount of bioinformatics material in a Biological Sciences curriculum. There is much discussion in the field on what is the best strategy to incorporate Bioinformatics in Life Sciences curricula, and I wonder whether an overload of different topics, techniques, approaches, and methods would be successful in contexts where instructors could not work individually with students.\n\nTable 3 displays a number of features of the course’s modules. However, a well structured program of each module is missing. As for reproducibility, a lesson plan describing how much time was allocated to each classroom activity (lectures, work in group, hands-on, work on individual projects, types and frequency of formative assessments, etc.) would help. Teaching materials provided in the Supplementary materials are not structured at all. 
Teaching materials are organised in modules, but when navigating the modules it is very difficult to understand how to use the various files. There is no homogeneity in file names, and a “readme” file describing the content of each folder (and how to use it in reproducing the course) is missing. Slides are not annotated. In summary, the materials are not reusable in their current form and the course would not be reproducible based on them and on the information provided in the article. The teaching techniques/strategies used in the classroom were not described/discussed, apart from mentioning the importance of the individual work with students. I think the article would benefit from more details on the course design and from a description of the pedagogical approaches the instructors adopted to teach programming and computational skills to bench scientists.\n\nI understand that a key point was the small number of students. Nevertheless, most courses with a small number of students and motivated instructors usually produce successful results. One big challenge is when the number is high. It would be interesting if the authors could reason about how their course could be translated into one for a bigger group of students. What should definitely be changed? Which other strategies could be adopted (peer instruction? Helpers?)?\n\nFinally, the authors use the term “competency/competencies” a lot. There is currently quite a lot of debate around the convenience of using competencies to describe the outcomes of courses. Indeed, competencies can hardly be assessed and mapped on a learning trajectory. By completing a single course, students may develop knowledge, skills and abilities (KSAs), which are measurable and accessible objects and the development of which can be followed along a learning trajectory, rather than competencies. 
Could the authors comment on this?\n\nHere are more specific points:\n\np.3 – Re the following sentence: “Practioners of bioinformatics now add techniques from statistics, information science and engineering to develop algorithms and build predictive models to understand the dynamics within a biological system.” In my experience, practitioners of bioinformatics have always added techniques from statistics, information theory and engineering to develop algorithms to predict the functioning of biological systems. The paradigm shift caused by the rapid advances in sequencing technologies is of different kind in my opinion: in the first place, bioinformatics has become the only approach to make sense of the deluge of biological data the authors refer to. Moreover, the storage, management, sharing, annotation, “fairfication” of the enormous amount of data produced, poses important technological challenges and emphasizes the need for new professions.\n\np. 3 – In the sentence: “Practioners of bioinformatics…”, “Practioners” should be changed to “Practitioners”. Please, check the whole manuscript for typos/misspellings. p. 3 – The authors put the sentence: “However, one of the biggest challenges is the heterogeneity of the backgrounds of the course participants” in opposition to the previous one on ISCB competencies (“However,…”). In contrast, I believe that Bioinformatics core competencies listed in Mulder et al. indirectly express the high degree of heterogeneity of backgrounds in bioinformatics. p.3 – Re the sentence: “In fact, there are three different types of user groups that employ bioinformatics in their research”, I would not define Bioinformatics Engineers as bioinformatics users, but rather developers and managers/maintainers of computational tools. 
p.4, Table 1 – There is another relevant group of bioinformatics practitioners: those who take care of and manage data, bioinformatics resources and their interoperability and develop standards, data quality metrics, ontologies, annotation, etc. The “big data issue” is especially relevant in the “omics” field and, in my opinion, it would be good if the authors could mention this fourth group, even though none of their students belonged to it. p.3, In the sentence: “We sent a three-question survey (Table 2) to all the participants to understand their reasoning for registering in the course.” I suggest that the authors replace “reasoning” with “motivations” or “reasons”. p.3, in the sentence “We also identified the following three course outcomes for the students.” The authors say “course outcomes”. What is a course outcome? I suspect they mean “learning outcomes”. There is quite a lot of confusion in the field around the definition and usage of “learning objectives”, “learning outcomes” and “teaching objectives”. I suggest that the authors replace “course outcomes” with “learning outcomes”. p.3, Re Learning outcomes. The literature provides quite precise rules for writing learning outcomes. You can use the sentence “by the end of the course, students will (NOT should) be able to” followed by an “actionable verb”, namely a verb expressing an action or a behaviour that can be (at least in principle) assessed. The verbs used in learning outcome I (“identify” and “implement”) are of this type, whereas some verbs used in II and III are not (“be comfortable”, “elicit”). Moreover, it is good practice to write learning outcomes that are as specific as possible in terms of both the cognitive complexity level they express and their content. For example, in learning outcome I, “identify” and “implement” express two different levels of cognitive complexity, and learning outcome II covers a large variety of content. p.3, Learning outcome II. 
What do the authors mean by “command line programming”? Do they mean “Linux shell scripting” or “navigating files and directories using the command line shell”? Being able to use R statistical packages implies being able to do (at least some) R programming. I suggest that the authors specify this. p.4, the footnote of Table 2 is misleading. What does it mean that the authors did not have the information about programming experience in the pre-class survey answers? Did they ask question 1 in the pre-class survey (as stated in the manuscript) or in class (as stated in the footnote)? Were the 7 responses about programming experience? If so, this means that the authors got 2 answers in class and 7 answers in the pre-class survey. Is this correct? Or is the pre-lab survey something else? This is very confusing. Table 2. Survey questions sent out to the students - As question 1 is about “programming experience”, please notice that “using bioinformatics software” is not “programming”. For consistency with the answers to questions 1 and 2, please specify the distribution of answers to question 3. p.4, Re the sentence: “Based on the responses of the students, we assigned potential user groups as explained in Table 1 at the start of the class with their expected competency levels at the end of the class.”, I have three main concerns: 1) I don’t see where competency levels at the end of the class are listed (unless the authors are now calling “competency levels” what they called “characteristics” in Table 1; should this be the case, in no way can students acquire the characteristics listed in Table 1 by completing the course described in this paper); 2) Competencies are yes/no objects, which means either an individual has a competency or they don’t have it. 
Therefore, it may be problematic to talk about “competency levels”; it may perhaps be more appropriate to talk about knowledge, skill or ability (KSA) levels; 3) If by “class” you mean a series of lectures on a subject, could you specify at the end of which class (a module? The entire course?) you defined “expected competency levels”? As a side note, a single class can possibly increase the level of a KSA, but surely not allow students to acquire a competency. p. 4: in the sentence: “Successful completion of the project assigned to every student by the end of a course module determined their competency of the course.” It is not clear what the authors mean by “competency of the course”. Do they mean that the competency acquired in a module determined students’ competency in the whole course?\n\np. 6: In the sentence: “We determined the competency of a student per module by their successful completion of the problem set and or the project.” what do the authors mean by “successful completion of the problem set and or the project”? Were there students who did not successfully complete the project? How did the instructors grade them?\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1547
|
https://f1000research.com/articles/6-620/v1
|
03 May 17
|
{
"type": "Research Article",
"title": "Efficacy of an 8-week course of sofosbuvir and ledipasvir for the treatment of HCV infection in selected HIV-infected patients",
"authors": [
"Onyema Ogbuagu",
"Ritche Hao",
"Michael Virata",
"Merceditas S. Villanueva",
"Maricar Malinis"
],
"abstract": "Background: With the availability of direct acting antiviral treatment for hepatitis C (HCV), HIV and HCV co-infected patients show comparable treatment responses to HCV-monoinfected patients. An 8-week course of sofosbuvir/ledipasvir (SOF/LDV) is highly effective for the treatment of HCV genotype 1 infection in treatment-naïve mono-infected patients with HCV viral loads <6 million IU/ml. There are limited data on the efficacy of this 8-week HCV treatment regimen in HIV-infected individuals with similar viral loads. Methods: The study was a retrospective review of HIV-infected adults coinfected with HCV genotype 1 for whom an 8-week course of SOF/LDV was prescribed by providers at two clinics in the Yale-New Haven health system from November 1, 2014 until April 30, 2016. Treatment efficacy was assessed as the proportion of treatment initiators who achieved a sustained virologic response 12 weeks after completion of therapy (SVR 12). Results: Nineteen patients met study eligibility criteria and included 14 men (74%) and 12 African-Americans (63%). All patients were on antiretroviral therapy with fully suppressed HIV viral loads and were HCV treatment-naïve. All patients had pre-treatment HCV viral loads <6 million IU/mL. Eighteen patients (95%) completed HCV treatment. Overall, SVR 12 was 95%, with 1 treatment failure occurring due to suboptimal adherence. Conclusion: Among our HIV-infected patient cohort with HCV genotype 1 infection, 95% of those treated with an 8-week course of SOF/LDV achieved SVR 12. This is comparable to the efficacy of the same treatment regimen in patients without HIV infection. This study lends proof of concept to the use of shorter-course SOF/LDV treatment for HIV-HCV genotype 1 coinfected patients with viral loads <6 million IU/ml. Larger studies are indicated to validate our findings.",
"keywords": [
"Hepatitis C genotype 1",
"Direct-acting antivirals",
"HIV",
"short-course therapy"
],
"content": "Introduction\n\nHIV and hepatitis C virus (HCV) share similar epidemiologic risk factors and routes of transmission, such that among HIV infected individuals, the prevalence of HCV infection is high and estimated at 25% (https://www.cdc.gov/hepatitis/populations/hiv.htm). Among certain risk groups, such as injection drug users with HIV infection, prevalence rates as high as 90% have been reported (https://www.cdc.gov/hepatitis/populations/hiv.htm). HIV infection alters the natural history of HCV disease, such that there are higher and faster rates of progression to liver cirrhosis with its resultant complications; this negative interaction may not be impacted by the receipt of effective antiretroviral therapy (ART)1,2. Owing to this, the treatment of HCV infection is prioritized for persons infected with HIV.\n\nSince the sequencing of the HCV genome, there has been an explosion in the number of drugs approved for the treatment of HCV infection. The direct acting antiviral treatment regimens now allow for fully orally administered, well tolerated, and highly effective treatments for HCV infection. However, multiple studies have shown that of the individuals eligible for HCV treatment, few have received treatment3,4. A primary obstacle to the treatment of HCV infection is the exorbitant cost of the treatment regimens5,6. Therefore, cost saving measures including shortened duration of therapy are of interest to patients and their providers7. Emerging data suggests that 8-week rather than 12-week regimens may be effective for HCV treatment among selected patients8,9.\n\nThe United States Food and Drug Administration (FDA) approved sofosbuvir/ledipasvir (SOF/LDV) in 2014 for the treatment of chronic HCV genotype 1 infection. 
The ION-3 study of SOF/LDV that included treatment-naïve non-cirrhotic patients with HCV genotype 1, found that the sustained virologic response 12 weeks after end of therapy (SVR12) was comparable between the 8-week (with and without ribavirin) and 12-week treatment arms in a post hoc analysis for patients who had a pre-treatment HCV viral load (VL) <6 million IU/ml10. Based on this data, the American Association for the Study of Liver Diseases/Infectious Diseases Society of America guidelines suggest considering an 8 week course for treatment-naïve genotype 1 non-cirrhotic patients with a pre-treatment HCV VL <6 million IU/ml (http://www.hcvguidelines.org/full-report/hcv-testing-and-linkage-care). However, the guidelines cite limited data as the reason to not recommend an 8-week SOF/LDV treatment course for HIV-infected patients.\n\nContemporary HCV treatment trials with directly acting antiviral (DAA) agents have shown that HIV infection status no longer independently impacts treatment outcomes11,12. Therefore, shorter HCV treatment regimens are likely to be as effective in HIV-infected individuals as their non-infected counterparts. Our study describes treatment outcomes of a short (8 week) course of SOF/LDV in HIV/HCV co-infected patients.\n\n\nMethods\n\nWe performed a retrospective review of all HIV and HCV co-infected patients, for whom an 8-week SOF/LDV treatment course was initiated from November 1, 2014 until April 30, 2016. The treatment decision for short course therapy was made by individual clinic providers at two clinics based at Yale New Haven Hospital in New Haven, CT, USA: Nathan Smith Clinic and the Haelen Center.\n\nEligibility criteria for the study included all adult (age >18 years) patients with confirmed HIV infection, who had HCV genotype 1 infection, and without co-infection with non-genotype 1 HCV. 
Only individuals for whom treatment with an 8-week course of SOF/LDV was intended were included in the analysis.\n\nElectronic medical records of eligible patients were reviewed. Data collected included demographic data, HIV clinical data (CD4 count, HIV VL, ART), and laboratory data, including complete blood counts, electrolytes, and liver biopsy results. Plasma HCV viral loads and genotypes were determined at our lab using the COBAS Ampliprep/COBAS Taqman HCV Test, v2.0 (Roche Diagnostics, Indianapolis, IN, USA). Assessment of liver fibrosis stage at time of treatment initiation was determined by one or more of the following: liver biopsy and non-invasive liver fibrosis scores, such as the AST to platelet ratio index (APRI)13 and fibrosis-4 (FIB-4) score14. Patient-reported adverse events and reasons for non-completion or discontinuation of treatment were based on documentation in electronic medical records. Data were recorded and analyzed using descriptive statistics in Microsoft Excel, v2013. The overall SVR 12 rate was defined as the proportion of individuals for whom an 8-week treatment course was initiated who had undetectable HCV viral loads 12 weeks after completion of therapy.\n\nStudy approval was obtained from the Yale University Human Investigations Committee (number 1506016104).\n\n\nResults\n\nA total of 19 patients met the study inclusion criteria. Median age was 53 years (IQR 42-73 years); 14 (74%) were males, and 12 (63%) were African-American. The median body mass index was 28.2 kg/m2. The majority (95%) had a glomerular filtration rate >60 ml/min. The major risk factor for HIV was injection drug use (53%). Median CD4 T cell count was 678 cells/µL (IQR 458-1004 cells/µL). All patients were on ART, of which non-nucleoside reverse transcriptase inhibitors (43%) followed by integrase strand transfer inhibitors (32%) were most common. Thirteen patients (68%) were on tenofovir/emtricitabine (FTC) and 5 (26%) were taking abacavir/lamivudine (3TC). 
Patients who were on HIV protease inhibitors were receiving tenofovir/FTC. All patients had fully suppressed HIV VLs (Table 1).\n\nALT, alanine transaminase; APRI, AST to platelet ratio index; ART, antiretroviral; AST, aspartate transaminase; BMI, body mass index; FIB-4, fibrosis 4; HCV, hepatitis C virus; HIV, human immunodeficiency virus; IDU, injection drug use; IQR, interquartile range; IU/ml, international units/milliliter; LFT, liver function test; MSM, man who has sex with men; NRTI, nucleoside(tide) reverse transcriptase inhibitor; NNRTI, non-nucleoside reverse transcriptase inhibitor; PI, protease inhibitor; INSTI, integrase strand transfer inhibitor; U/L, units/litre; FTC, emtricitabine; 3TC, lamivudine.\n\nTwelve (63%) patients had HCV genotype 1a and 5 (26%) had genotype 1b; in 2 patients, genotype 1 subtyping was not done. Median AST and ALT values were 39 (IQR 31-63) units/L and 45 (IQR 32-70) units/L, respectively. All patients had baseline HCV VLs of <6 million IU/mL and were HCV treatment-naïve. Based on APRI and FIB-4 scores, two patients had cirrhosis, but were clinically compensated.\n\nEighteen patients (95%) completed 8 weeks of therapy. One patient was non-adherent due to active substance abuse and only completed the first 4 weeks of treatment. Adverse events while on treatment were reported by 6 patients as follows: diarrhea (n=1), abdominal pain (n=1), nausea (n=1), poor appetite (n=1), diffuse joint pains (n=1), and pruritus without rash (n=1). One patient, who experienced fatigue due to influenza, temporarily discontinued treatment for 7 days, but resumed treatment afterwards. There were no cases of renal insufficiency, including in patients who were on HIV protease inhibitors and tenofovir/FTC. 
Overall, SOF/LDV was well tolerated with no treatment discontinuations due to adverse effects.\n\nAll eligible patients had at least one HCV VL assay performed either at week 4 or week 8 of treatment and at 12 weeks following completion of therapy. At week 4 of treatment, 11 of 12 patients for whom there was available data had undetectable HCV VLs; one patient had viremia that was less than the lower limit of detection of the assay (<15 IU/ml). At week 8 of treatment, 11 of 12 patients who had available HCV VLs had undetectable HCV VLs. The patient who had a detectable HCV VL at week 8 had completed only 4 weeks of therapy and was subsequently non-adherent due to active substance use. In total, 18 of the 19 patients achieved SVR 12. Therefore, the overall SVR 12 rate was 95% (Table 2). The two patients who had cirrhosis also achieved SVR 12.\n\nHIV, human immunodeficiency virus; HCV, hepatitis C virus; SVR 12, sustained virologic response 12 weeks after completion of therapy; VL, viral load.\n\n1Data only available for 12 patients; 2Data only available for 12 patients (1 patient only completed 4 weeks of treatment); 3Data obtained from 19 patients 12 weeks after completion of treatment.\n\n\nDiscussion\n\nHIV and HCV infections are often referred to as syndemics as they share similar routes of transmission and impact populations that have similar demographic and socio-economic profiles15. The prevalence of HCV infection ranges from 2.4% in the general HIV-infected population to 82.4% among HIV-infected persons who inject drugs15.\n\nThe implications of HCV co-infection are significant. Hepatitis C contributes to increased liver-related morbidity, including complications of end-stage liver disease, such as hepatocellular carcinoma, as well as mortality16. The presence of HIV infection confers a risk for accelerated progression of liver disease, even when HIV is virally suppressed17. 
For these reasons, the treatment of HCV should be prioritized for HIV-infected persons.\n\nMultiple studies have shown that HIV co-infection is no longer a significant predictor of poor HCV treatment outcomes, such that cure rates among individuals infected with HIV are similar to those who are uninfected11. Similarly, factors that predict poorer response to DAAs, including the presence of cirrhosis, HCV genotype, resistance-associated variants, interleukin-28B polymorphisms, and prior treatment experience, apply to HIV-infected and uninfected individuals alike18.\n\nThere is interest in shorter HCV treatment durations for a number of reasons: the prohibitive cost of newer DAAs19, and issues of adherence and potential development of resistance or toxicity. Kowdley et al showed in a post hoc analysis that an 8-week treatment regimen of SOF/LDV resulted in a high SVR 12 rate among non-cirrhotic HCV infected individuals with genotype 1 infection that was non-inferior to an 8-week regimen with ribavirin or a 12-week regimen without ribavirin10. Lower relapse rates were observed among patients receiving 8 weeks of SOF/LDV who had baseline HCV RNA levels <6 million IU/ml (2%; 2 of 123). However, this study did not include HIV-infected individuals.\n\nIn a subsequent real-world multi-national retrospective study of 634 patients, an 8-week course of SOF/LDV resulted in an overall SVR 12 of 98.1% in non-cirrhotic treatment-naïve individuals regardless of HCV VLs. This study included 16 HIV-infected individuals, and for those with VL >6 million IU/ml, 100% achieved SVR 129. Unlike the previous study, this study found pre-treatment HCV VL >6 million IU/ml in a subset of patients with HIV infection that did not affect treatment outcomes, including relapse rates.\n\nOur study, showing an SVR 12 of 95%, is similar to the rates observed in a German cohort of 28 HIV-HCV co-infected patients, who showed a 96% response rate to 8 week therapy using SOF/LDV (GECCO-01 study)8. 
All patients in the trial were on antiretroviral therapy with a median CD4 count of 604 cells/mm3. However, that cohort consisted of predominantly Caucasian and male patients (89%). Our patient demographic was different, with more women (26% versus 11%) and a majority of African-American patients (63%).\n\nIt is important to highlight that not all DAA-based 8-week treatment courses for HIV-infected patients have yielded satisfactory results. The phase 3 ALLY-2 study explored 8-week and 12-week SOF/daclatasvir treatment courses in HIV-infected individuals with HCV genotypes 1-420. For treatment-naïve patients with HCV genotype 1, SVR 12 was 96% in the 12-week arm and 76% in the 8-week arm. However, it was observed that patients with HCV VL <2 million IU/ml performed better than those with viral loads >2 million IU/ml (SVR 12 of 100% versus 62%), supporting excellent efficacy with lower viral loads20.\n\nOur 95% SVR 12 rate in individuals placed on short-course treatment may be attributable to certain factors: excellent adherence (supported by well controlled HIV infection) and selection of individuals with low HCV viral loads, factors that are associated with a higher likelihood of cure18,21. It is remarkable that 26% of subjects were women and almost two-thirds were African-American, two groups that are often under-represented in HCV treatment studies; this increases the generalizability of the study results. The two individuals with cirrhosis also achieved excellent treatment results. In spite of the small number of patients in our study, the concordance of our findings with the European cohort in the GECCO-01 trial, as well as the multi-center study reported by Kowdley et al, lends support to its validity.\n\nA limitation of our study is that it was retrospective; therefore, the data captured were dependent on the quality of documentation by patient providers. 
Our patient demographic may not be representative of patients in settings different from ours. There may be a treatment selection bias, whereby patients who were more likely to adhere to therapy and had characteristics favorable to achieving an optimal response were initiated on therapy by their clinic providers. Due to the low number of patients with cirrhosis in our cohort, it is not advisable to extend the conclusions to this subgroup.\n\nIn summary, our study provides support for the use of an 8-week course of SOF/LDV as an effective treatment option for HIV and genotype 1 HCV co-infected individuals with HCV viral loads <6 million IU/ml.\n\n\nData availability\n\nDataset 1: Spreadsheet data showing baseline demographic and clinical characteristics, as well as treatment outcomes, of HIV-HCV patients treated with an 8-week course of sofosbuvir/ledipasvir. doi: 10.5256/f1000research.11397.d15956122\n\n\nEthical statement\n\nThis medical review was approved by the Yale University Human Investigations Committee (number 1506016104); individual patient consent was not required in this retrospective chart review.",
"appendix": "Author contributions\n\n\n\nOO and MM conceived of the project; OO, RH and MM collected study data; OO, RH, MV, MSV and MM participated in data analysis, drafting and revision of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nWeber R, Sabin CA, Friis-Moller N, et al.: Liver-related deaths in persons infected with the human immunodeficiency virus: the D:A:D study. Arch Intern Med. 2006; 166(15): 1632–1641. PubMed Abstract | Publisher Full Text\n\nKitahata MM, Gange SJ, Abraham AG, et al.: Effect of early versus deferred antiretroviral therapy for HIV on survival. N Engl J Med. 2009; 360(18): 1815–1826. PubMed Abstract | Publisher Full Text | Free Full Text\n\nButt AA, McGinnis K, Skanderson M, et al.: A comparison of treatment eligibility for hepatitis C virus in HCV-monoinfected versus HCV/HIV-coinfected persons in electronically retrieved cohort of HCV-infected veterans. AIDS Res Hum Retroviruses. 2011; 27(9): 973–979. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCachay ER, Hill L, Wyles D, et al.: The Hepatitis C Cascade of Care among HIV Infected Patients: A Call to Address Ongoing Barriers to Care. PLoS One. 2014; 9(7): e102883. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoon AM, Green PK, Berry K, et al.: Transformation of hepatitis C antiviral treatment in a national healthcare system following the introduction of direct antiviral agents. Aliment Pharmacol Ther. 2017; 45(9): 1201–1212. PubMed Abstract | Publisher Full Text\n\nKonerman MA, Lok AS: Hepatitis C Treatment and Barriers to Eradication. Clin Transl Gastroenterol. 2016; 7(9): e193. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGross C, Akoth E, Price A, et al.: HIV/HCV Co-infection: Overcoming Barriers to Treatment. J Assoc Nurses AIDS Care. 2016; 27(4): 524–529. 
PubMed Abstract | Publisher Full Text\n\nIngiliz P, Christensen S, Kimhofer T, et al.: Sofosbuvir and Ledipasvir for 8 Weeks for the Treatment of Chronic Hepatitis C Virus (HCV) Infection in HCV-Monoinfected and HIV-HCV-Coinfected Individuals: Results From the German Hepatitis C Cohort (GECCO-01). Clin Infect Dis. 2016; 63(10): 1320–1324. PubMed Abstract | Publisher Full Text\n\nKowdley KV, Sundaram V, Jeon CY, et al.: Eight weeks of Ledipasvir/Sofosbuvir is effective for selected patients with genotype 1 Hepatitis C virus infection. Hepatology. 2017; 65(4): 1094–1103. PubMed Abstract | Publisher Full Text\n\nKowdley KV, Gordon SC, Reddy KR, et al.: Ledipasvir and sofosbuvir for 8 or 12 weeks for chronic HCV without cirrhosis. N Engl J Med. 2014; 370(20): 1879–1888. PubMed Abstract | Publisher Full Text\n\nShafran SD: HIV Coinfected Have Similar SVR Rates as HCV Monoinfected With DAAs: It's Time to End Segregation and Integrate HIV Patients Into HCV Trials. Clin Infect Dis. 2015; 61(7): 1127–1134. PubMed Abstract | Publisher Full Text\n\nMilazzo L, Lai A, Calvi E, et al.: Direct-acting antivirals in hepatitis C virus (HCV)-infected and HCV/HIV-coinfected patients: real-life safety and efficacy. HIV Med. 2017; 18(4): 284–291. PubMed Abstract | Publisher Full Text\n\nLin ZH, Xin YN, Dong QJ, et al.: Performance of the aspartate aminotransferase-to-platelet ratio index for the staging of hepatitis C-related fibrosis: an updated meta-analysis. Hepatology. 2011; 53(3): 726–736. PubMed Abstract | Publisher Full Text\n\nSterling RK, Lissen E, Clumeck N, et al.: Development of a simple noninvasive index to predict significant fibrosis in patients with HIV/HCV coinfection. Hepatology. 2006; 43(6): 1317–1325. PubMed Abstract | Publisher Full Text\n\nPlatt L, Easterbrook P, Gower E, et al.: Prevalence and burden of HCV co-infection in people living with HIV: a global systematic review and meta-analysis. Lancet Infect Dis. 2016; 16(7): 797–808. 
PubMed Abstract | Publisher Full Text\n\nKlein MB, Rollet KC, Saeed S, et al.: HIV and hepatitis C virus coinfection in Canada: challenges and opportunities for reducing preventable morbidity and mortality. HIV Med. 2013; 14(1): 10–20. PubMed Abstract | Publisher Full Text\n\nLo Re V 3rd, Kallan MJ, Tate JP, et al.: Hepatic decompensation in antiretroviral-treated patients co-infected with HIV and hepatitis C virus compared with hepatitis C virus-monoinfected patients: a cohort study. Ann Intern Med. 2014; 160(6): 369–379. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCavalcante LN, Lyra AC: Predictive factors associated with hepatitis C antiviral therapy response. World J Hepatol. 2015; 7(12): 1617–1631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChhatwal J, He T, Lopez-Olivo MA: Systematic Review of Modelling Approaches for the Cost Effectiveness of Hepatitis C Treatment with Direct-Acting Antivirals. Pharmacoeconomics. 2016; 34(6): 551–67. PubMed Abstract | Publisher Full Text\n\nWyles DL, Ruane PJ, Sulkowski MS, et al.: Daclatasvir plus Sofosbuvir for HCV in Patients Coinfected with HIV-1. N Engl J Med. 2015; 373(8): 714–725. PubMed Abstract | Publisher Full Text\n\nLouie V, Latt NL, Gharibian D, et al.: Real-World Experiences With a Direct-Acting Antiviral Agent for Patients With Hepatitis C Virus Infection. Perm J. 2017; 21: 16–096. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOgbuagu O, Hao R, Virata M, et al.: Dataset 1 in: Efficacy of an 8-week course of sofosbuvir and ledipasvir for the treatment of HCV infection in selected HIV-infected patients. F1000Research. 2017. Data Source"
}
|
[
{
"id": "22470",
"date": "05 Jun 2017",
"name": "Patrick Ingiliz",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOgbuagu and coworkers provide a small study on 19 HIV-HCV coinfected individuals who were treated with an 8-week course of sofosbuvir and ledipasvir. Overall, the SVR rate is 95%, with only one individual not responding, who was non-adherent. The study, although small, adds knowledge to the existing literature.\nMinor comments:\nThe authors should point out more clearly that they are dealing with a difficult-to-treat population here: a high percentage of AAs, high levels of IDU, and high BMI. This makes the results even more valuable.\n\nIn the Introduction the authors should point out that DAAs have changed treatment paradigms in HCV, but that a 12-week treatment duration has still proved a hard threshold to beat. So far, shorter therapy has only worked with the regimen presented here; it will, however, change with new compounds.\n\nIn the discussion, the description of the ION-3 trial is slightly inaccurate: the non-inferiority of the 8-week regimen was an endpoint of the study. The 6 million IU/ml viral load threshold was a post hoc analysis.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2819",
"date": "21 Sep 2018",
"name": "Merceditas Villanueva",
"role": "Author Response",
"response": "In response to Dr Ingiliz's comments: It is correct that our cohort, consisting predominantly of AAs, current or ex-IDUs, with a high median BMI, and who are HIV co-infected, represents traditionally hard-to-treat populations, which makes the treatment results all the more remarkable in spite of the small sample size. This is now reflected in the discussion as follows: \"The high SVR12 rate in our study is even more remarkable given that all patients were HIV infected and a significant proportion were African-American, had a high BMI and were active or current IDUs, all of which are characteristics of traditionally hard to treat populations. \" We have modified the sentence in the Introduction referencing 8-week and 12-week treatment regimens to reflect, as accurately pointed out, that for currently approved DAA regimens, 12-week treatment duration remains the standard for most patients, while 8-week regimens may be used for \"selected cases\". This is now reflected in the introduction as follows: \"While 12-week DAA-based treatment regimens remain the standard treatment course for most HCV infected patients, emerging data suggests that 8-week rather than 12-week regimens may be effective for treatment among selected patients \" The paragraph has been rephrased to accurately reflect the primary results of the open label randomized ION-3 study as well as the post hoc analysis, as follows: \"Kowdley et al, in a phase 3 open label randomized trial, showed that an 8-week treatment regimen of SOF/LDV resulted in a high SVR 12 rate among non-cirrhotic HCV infected individuals with genotype 1 infection that was non-inferior to an 8-week regimen with ribavirin or a 12-week regimen without ribavirin 10 . In a post hoc analysis, lower relapse rates were observed among patients receiving 8 weeks of SOF/LDV who had baseline HCV RNA levels <6 million IU/ml (2%; 2 of 123).\""
}
]
},
{
"id": "24844",
"date": "30 Aug 2018",
"name": "David K Wong",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral: This is a small retrospective study that adds to the evidence that those with HCV (genotype 1)-HIV co-infection and low viral load can be treated successfully with 8 weeks of Sofosbuvir/Ledipasvir. Adherence matters as the one treatment failure did not complete treatment. The data from this cohort (two patients not described) are not strong enough to recommend this strategy for those with established cirrhosis. On-treatment monitoring of HCV PCR adds little to treatment.\nSpecific comments: 1. The introduction is a bit dated. We should no longer need to justify treatment of those with HIV co-infection as a priority population. The simple fact of HCV infection means that these individuals should be offered treatment.\n2. Introduction points out that HIV infection status no longer independently impacts treatment outcomes. The introduction should ALSO point out that those with HCV without HIV, NO cirrhosis and low viral load can be successfully treated with 8 weeks of SOF/LDV.\n3. Two patients were thought to be cirrhotic but clinically compensated - how compensated? They should be described further and they should be pointed out in Table 3 - what were platelet counts, INR, albumin, Bilirubin.\n4. The discussion is repetitive - should not repeat what was said in introduction\n5. Study presents data of on-treatment HCV PCR monitoring. 
Do the authors think that this is required?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3975",
"date": "21 Sep 2018",
"name": "Merceditas Villanueva",
"role": "Author Response",
"response": "General:This is a small retrospective study that adds to the evidence that those with HCV (genotype 1)-HIV co-infection and low viral load can be treated successfully with 8 weeks of Sofosbuvir/Ledipasvir. Adherence matters as the one treatment failure did not complete treatment. The data from this cohort (two patients not described) are not strong enough to recommend this strategy for those with established cirrhosis. On-treatment monitoring of HCV PCR adds little to treatment.Response- Thanks for the comment. Regarding the concern about recommending short course treatment for cirrhotics, we acknowledge in the limitations section that… “Due to the low number of patients with cirrhosis in our cohort, it is not advisable to extend the conclusions to this subgroup.”Also, we agree that on-treatment monitoring of HCV viral loads adds little to treatment especially as there are no “stoppage” rules based on pre-set viral load decay parameters monitored over time. We captured HCV viral load assessments as checked by clinic providers which was based on the clinic HCV treatment protocol at the time the study was conducted.Specific comments:1. The introduction is a bit dated. We should no longer need to justify treatment of those with HIV co-infection as a priority population. The simple fact of HCV infection means that these individuals should be offered treatment.Response- we agree that there is no longer a need to justify treatment of HIV infected patients with HCV as they experience similar treatment outcomes. We have eliminated the sentence “owing to this the treatment of HCV infection is prioritized for persons infected with HIV” from the revised manuscript2. Introduction points out that HIV infection status no longer independently impacts treatment outcomes. 
The introduction should ALSO point out that those with HCV without HIV, NO cirrhosis and low viral load can be successfully treated with 8 weeks of SOF/LDV.Response: We have modified the 3rdsentence of paragraph 3 of the introduction as follows: “Based on this data, the American Association for the Study of Liver Diseases/Infectious Diseases Society of America guidelines recommend that treatment-naïve, genotype 1 patients without cirrhosis, are non-black, HIV-negative and with a pre-treatment HCV VL <6 million IU/ml ( http://www.hcvguidelines.org/full-report/hcv-testing-and-linkage-care) can be successfully treated with 8 weeks of SOF/LDV.3. Two patients were thought to be cirrhotic but clinically compensated - how compensated? They should be described further and they should be pointed out in Table 3 - what were platelet counts, INR, albumin, Bilirubin.Response: We noted discrepancies in the dataset attached to our original submission when compared to our original data! (the APRI and Fib-4 scores were erroneously arranged in descending order on the submitted version and not on the same row with appropriate patients). We have corrected this on the re-submission. Both patients had APRI and Fib-4 scores that were above cut-off values that are suggestive of liver cirrhosis. We used the term “compensated” to mean that they had no documented clinical features of decompensation including development of HCC, ascites, porto-systemic encephalopathy or varices. We did calculate Child Pugh scores (inclusive of INR and albumin levels) and both patients were class A. Both of these last points are mentioned in the results section.4. The discussion is repetitive - should not repeat what was said in introductionResponse: Thank you. We have modified the discussion to remove redundant / repetitive statements.5. Study presents data of on-treatment HCV PCR monitoring. 
Do the authors think that this is required?Response: as stated in our first response to general comments -we agree that on-treatment monitoring of HCV viral loads adds little to treatment especially as there are no “stoppage” rules based on pre-set viral load decay parameters monitored over time. We captured HCV viral load assessments as checked by clinic providers which was based on the clinic HCV treatment protocol at the time the study was conducted"
}
]
}
] | 1
|
https://f1000research.com/articles/6-620
|
https://f1000research.com/articles/7-1514/v1
|
21 Sep 18
|
{
"type": "Research Article",
"title": "Investigating the dynamics of Leishmania antigen in the urine of patients with visceral leishmaniasis: a pilot study",
"authors": [
"Prakash Ghosh",
"Israel Cruz",
"Albert Picado",
"Thomas Edwards",
"Md. Anik Ashfaq Khan",
"Faria Hossain",
"Rajashree Chowdhury",
"Emily R. Adams",
"Rupen Nath",
"Joseph M. Ndung'u",
"Dinesh Mondal",
"Prakash Ghosh",
"Israel Cruz",
"Albert Picado",
"Thomas Edwards",
"Md. Anik Ashfaq Khan",
"Faria Hossain",
"Rajashree Chowdhury",
"Emily R. Adams",
"Rupen Nath"
],
"abstract": "Background: Detection of Leishmania antigens in the urine provides a non-invasive means of diagnosis and treatment monitoring of cases of visceral leishmaniasis (VL). Leishmania antigen load in the urine may vary between different time-points within a day, thus influencing the performance of antigen-detection tests. Methods: We investigated the dynamics of Leishmania antigen in urine collected at three different time points (08:00, 12:00 and 16:00 hours). All urine samples collected were tested with the Leishmania Antigen ELISA (VL ELISA) kit, produced by Kalon Biological Ltd., UK. Results: The median concentration of Leishmania antigen in urine collected at 08:00 (2.7 UAU-urinary antigen units/ml) was higher than at 12:00 (1.7 UAU/ml) and at 16:00 (1.9 UAU/ml). These differences were found to be statistically significant (08:00 vs. 12:00, p=0.011; 08:00 vs. 16:00, p=0.041). Conclusion: This pilot study indicates that the Leishmania antigen concentration is higher in urine samples collected in the morning, which has important implications when the VL ELISA kit or other tests to detect Leishmania antigen in urine are used for diagnosis of VL and treatment monitoring.",
"keywords": [
"Visceral Leishmaniasis",
"Leishmania Antigen",
"ELISA",
"Urine",
"Diagnosis",
"Treatment monitoring"
],
"content": "Introduction\n\nVisceral Leishmaniasis (VL), also known as kala-azar, is a potentially fatal vector-borne disease that, in the Indian subcontinent, is caused by Leishmania donovani protozoa, which are transmitted by female Phlebotomus argentipes sand flies1. The number of cases of VL per year worldwide is estimated to be 0.2–0.4 million, with 20,000 to 40,000 associated deaths. Just six countries, in which the disease is transmitted as part of an anthroponotic (Bangladesh, India), zoonotic (Brazil) or a probable anthropozoonotic cycle (Ethiopia, South Sudan, Sudan), account for 90% of VL cases worldwide2–4.\n\nIn patients presenting with VL-compatible signs, namely fever for more than two weeks plus splenomegaly and/or weight loss. VL is usually diagnosed by serology, either with a direct agglutination test or rK39 antigen-based rapid diagnostic tests. When parasite confirmation is required, the main approach is tissue aspirate microscopy (from spleen, bone marrow and to a lesser extent lymph node), which has a variable sensitivity and, because of the invasiveness of the procedure (especially spleen aspiration), requires experienced personnel and should be performed in hospitals where blood transfusion and surgical facilities are available. Besides, the accuracy of microscopic examination is influenced by the ability of the laboratory technician and the quality of the reagents and equipment used4,5. Parasite confirmation by tissue aspirate microscopy is also used for treatment monitoring, test-of-cure (TOC) and diagnosis of relapses, since serology is useless for this purpose, as anti-Leishmania antibodies may remain detectable up to several years after cure5,6. Initial cure rates vary between 49% and 94%. Therefore, alternative, less invasive options to invasive tissue aspiration and microscopy are needed to monitor treatment responsiveness, diagnose relapses and assess cure. 
Although molecular methods, such as PCR, have been shown to be effective in VL diagnosis and treatment monitoring using less invasive samples, they unfortunately require sophisticated laboratories and trained personnel, and there are no standardized protocols that can be used across endemic settings, which hinders their application7,8.\n\nAntigen detection tests, ideally in less invasive samples such as blood, serum/plasma or urine, are an interesting option, as antigen levels should reflect the parasite load in the patient. These tests also present an advantage over antibody detection in immunocompromised patients with low antibody response, as in Leishmania/HIV coinfection9. In chronic infections, such as VL, the detection of antigens of the pathogen in blood or serum/plasma can be complicated by the presence of high levels of antibodies, circulating immune complexes, serum amyloid, rheumatoid factors, and autoantibodies, all of which may mask immunologically important antigenic determinants or competitively inhibit the binding of antibodies to free antigens10. Nevertheless, Gao et al.11 proved that it was possible to detect Leishmania antigen in the sera of VL patients from China with high sensitivity and specificity. However, many of the problems described above may be avoided by searching for antigens in urine. Several studies have demonstrated Leishmania antigens in the urine of VL patients using different approaches, such as countercurrent immunoelectrophoresis, Western blot, latex agglutination test and ELISA12–16.\n\nFluctuations in the quantity of Leishmania antigens excreted through urine might influence the sensitivity of these assays. According to the Clinical and Laboratory Standards Institute guidelines, and confirmed by other authors, urine collected in the early morning contains urinary components at the highest concentration and is more reliable for quantification of urine markers17,18. 
However, there is no evidence concerning the persistence and levels of Leishmania antigen in urine collected in the early morning versus other time points. Therefore, given the utility of antigen detection tests in VL diagnosis and treatment monitoring, we set out to study the dynamics of Leishmania antigens in urine in order to determine which time point is the most appropriate to detect Leishmania antigens in VL patients using the Leishmania Antigen ELISA (VL ELISA) kit (Kalon Biological, Ltd., UK). Further, in a recent study we showed that the parasite load in relapse VL is higher than in primary VL cases19. Therefore, we hypothesized that the level of Leishmania antigens in urine might differ in different states of VL. In our current study we compared Leishmania antigen levels in patients with primary VL and relapse VL.\n\n\nMethods\n\nThis study was conducted at the Emerging Infections and Parasitology Laboratory, International Centre for Diarrheal Disease Research, Bangladesh (icddr,b), between 15 March and 30 April 2016. The study population was a convenience sample of 16 patients with VL (seven primary VL, seven relapse VL and two with treatment failure) who were invited to participate in the study while hospitalized at Surya Kanta Kala-azar Research Centre (SKKRC), the only specialized hospital for VL treatment in Bangladesh. Patients were eligible if they had VL. Patients in the study were grouped as type-1 (primary VL) or type-2 (patients presenting with either relapsed disease or treatment failure). The patients were diagnosed according to the national guidelines for VL management in Bangladesh: a patient from a VL-endemic area presenting with fever for more than 2 weeks, splenomegaly and positive by rK39 rapid diagnostic test (here, Kalazar DetectTM, InBios Intl., USA was used). Information on clinical and demographic characteristics of the patients is provided in Table 1.\n\n*mm/h. 
ESR, erythrocyte sedimentation rate.\n\nUrine samples were collected at 4-hour intervals compatible with routine activities at SKKRC from each of the 16 enrolled patients before initiation of treatment. A total of 50 ml midstream urine was collected in a tube containing 0.1% NaN3 at 8:00, 12:00 and 16:00 hours. Immediately after collection, all samples were stored at -20°C in SKKRC facilities and then transported to icddr,b, maintaining the cold chain. A 2-ml aliquot of urine from each of the subjects and time points was used for this study.\n\nThe Leishmania Antigen ELISA (VL ELISA) (Kalon Biological Ltd., UK) uses a set of polyclonal antibodies against non-proteic Leishmania antigens. As the antigens detected in urine with this kit remain largely uncharacterized, the unit Urinary Antigen Unit (UAU) is used to express the amount of Leishmania antigens detected. ELISA was performed according to the manufacturer’s instructions, described elsewhere16. Briefly, samples were diluted using the assay diluent provided with the kit and a 1:20 dilution was used to determine the antigen concentration. A total of 100 µl diluted urine were tested in triplicate together with duplicates of the antigen calibrators included in the kit using 96-well ELISA plates. After incubation at room temperature optical density (OD) was read at 450 and 620 nm (Biotek, microplate reader). OD at 620 nm was subtracted from OD at 450 nm for further calculations of UAU. A four-parameter logistic standard curve was constructed for each plate using the calibrator provided with the kit. Then Leishmania antigen level in each sample was estimated from the standard curve.\n\nThe difference between antigen concentrations at three different time points of all urine samples was investigated. Based on the distribution of data, a non-parametric test (Wilcoxon matched-pairs signed Rank test) was performed to determine significant differences between medians. 
To find out any correlation between the antigen concentrations at different time points and participants’ age, Spearman’s test was performed. The Mann-Whitney U-test was performed to investigate the difference in the antigen concentrations at different time points between sexes and the difference between type-1 and type-2 patients. Statistical analyses were performed using the GraphPad Prism software version 7.03 and SPSS version 20.0.\n\nThis study was approved by the icddr,b Ethical Review Committee, research protocol number PR-14093. Informed written consent was collected from each participant, or the legal guardian in the case of children.\n\n\nResults\n\nThe median concentration of Leishmania antigens was 2.7 UAU/ml, 1.7 UAU/ml and 1.9 UAU/ml in urine samples collected at 08:00, 12:00 and 16:00, respectively (Figure 1). Most of the study subjects (9/16, 56.3%) showed highest urinary Leishmania antigen concentration at 08:00 (Table 2). The five patients presenting the highest antigen concentration at other time points had either identical or similar levels at 08:00. Only two patients (IDs 7 and 16) showed a marked decrease in antigen concentration at 08:00 compared to 16:00. The median concentration of Leishmania antigens in urine collected at 08:00 was significantly higher than the median concentration of Leishmania antigen in urine collected at 12:00 (p=0.011) and at 16:00 (p=0.041) (Figure 1). However, we did not find significant differences in the Leishmania antigen levels between urine samples collected at 12:00 and 16:00 (p=0.820). Further, the investigation did not find any association between the antigen concentrations at different time points and participants’ age or sex (Table 3). 
In addition, the concentration of antigen in urine of primary VL cases did not differ from the antigen concentration in patients with VL relapse or treatment failure (Table 3).\n\nBold figures indicate highest daily concentration.\n\nVL, visceral leishmaniasis; UAU, urinary antigen unit.\n\nType 1, Primary VL; Type 2, VL relapse or treatment failure.\n\n\nDiscussion\n\nOne of the antigen detection tests most widely used in VL diagnosis is the KAtex latex agglutination test (Kalon Biological, Ltd., UK). Although the first studies showed very promising results, further evaluations showed that this test has variable sensitivity (36–100%) and specificity (64–99%)20, which has limited its wide use for both diagnosis and treatment.\n\nGiven the potential applications of Leishmania antigen detection tests, the interest in developing new approaches has been sustained, and recent efforts have resulted in the development of two standardised, user-friendly, quantitative and direct ELISA tests that may prove to be useful for VL diagnosis and treatment monitoring: the Leishmania Antigen DetectTM ELISA (InBios International Inc., USA) and the Leishmania Antigen ELISA (VL ELISA) (Kalon Biological, Ltd., UK). Moreover, as strategies for VL control in the Indian subcontinent (ISC) seem to be working well, the main foci to be considered now are South America and Eastern Africa, where VL is zoonotic in the former and anthropo-zoonotic in the latter. 
To date, no study has been performed to evaluate this test for canine leishmaniasis, where it might have the potential to diagnose canine leishmaniasis and thereby improve surveillance of reservoirs, proper treatment, monitoring of transmission and assessment of the efficacy of control activities in endemic areas, including South America and the Mediterranean basin, where canine leishmaniasis is an important veterinary issue.\n\nAlthough these two ELISAs have the potential to be useful for treatment-monitoring in human VL, they also showed that at the same time point, especially on the day of diagnosis, the parasite load can be very different from patient to patient16. This could be due to the fact that patients are not at the same moment of the VL episode when they seek diagnosis, or because the samples were taken at different times of the day. In this pilot study we have tried to address the second explanation, and have found that the highest level of Leishmania antigen in urine is obtained with early-morning urine samples. A recent study found that urine collected in the early morning improves the sensitivity of the urinary lateral flow LAM assay for diagnosis of TB in HIV-infected patients, which is congruent with our study finding21.\n\nThe Kala-azar Elimination Programme in the ISC has been conducting diverse activities since 2005, with active case detection being one of the key activities to stop transmission of VL22. However, to eliminate the disease, proper follow-up of treated VL cases and prompt relapse management is no less important, since in the ISC 1–16% of treated VL patients relapse and 10–20% develop post kala-azar dermal leishmaniasis (PKDL)16,23. At present icddr,b, in collaboration with the Liverpool School of Tropical Medicine and the Foundation for Innovative New Diagnostics, is evaluating the efficacy of the Leishmania Antigen ELISA (VL ELISA) kit for diagnosis of VL, PKDL and asymptomatic infection in Bangladesh. 
Thus it is critical to ensure that the urine samples are taken at a time and in conditions that increase the chances of detecting Leishmania antigens. In this pilot study we have assessed the dynamics of Leishmania antigens in urine from VL patients attending the SKKRC hospital in Bangladesh, and we have found that urine collected at 08:00 contains the highest amount of Leishmania antigens. These findings can be used as a guide to ensure the best performance of the Leishmania Antigen ELISA (VL ELISA) kit when used either for VL diagnosis or treatment monitoring, as well as for future implementation of this method in endemic regions where the disease is zoonotic. Furthermore, prospective studies are warranted to explore the efficiency of the Leishmania Antigen ELISA (VL ELISA) kit as a predictor of VL relapse.\n\n\nConclusion\n\nThe Leishmania antigen load in the urine of VL patients varies at different times during the day, and is highest in the morning. This should be taken into account in order to increase the sensitivity of the Leishmania Antigen ELISA (VL ELISA) kit, and to harmonize sample collection time points during treatment follow-up, so that measurements taken on different days can be reliably compared.\n\n\nData availability\n\nDataset 1. Details of patient symptoms, demographic information and results of ELISA for Leishmania antigens. Leishmania antigen load is not found in this Dataset, but can be found in Table 2. https://doi.org/10.5256/f1000research.16181.d21763224.",
"appendix": "Grant information\n\nThis work was supported by funds from the Ministry of Foreign Affairs, Government of the Netherlands (Activity Ref. Nr. 22211, Developing Innovative Diagnostics to Address Poverty-related Diseases; http://www.minbuza.nl).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to all the study participants. We are also thankful to icddr,b and its core donors: GOB, UKAID, USAID and SIDA.\n\n\nReferences\n\nChowdhury R, Mondal D, Chowdhury V, et al.: How far are we from visceral leishmaniasis elimination in Bangladesh? An assessment of epidemiological surveillance data. PLoS Negl Trop Dis. 2014; 8(8): e3020. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlvar J, Vélez ID, Bern C, et al.: Leishmaniasis worldwide and global estimates of its incidence. PLoS One. 2012; 7(5): e35671. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Leishmaniasis in high-burden countries: an epidemiological update based on data reported in 2014. Wkly Epidemiol Rec. 2016; 91(22): 287–96. PubMed Abstract\n\nWorld Health Organization: Control of the leishmaniases. World Health Organ Tech Rep Ser. 2010; (949): xii-xiii, 1-186, back cover. PubMed Abstract\n\nSundar S, Rai M: Laboratory diagnosis of visceral leishmaniasis. Clin Diagn Lab Immunol. 2002; 9(5): 951–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGidwani K, Picado A, Ostyn B, et al.: Persistence of Leishmania donovani antibodies in past visceral leishmaniasis cases in India. Clin Vaccine Immunol. 2011; 18(2): 346–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurza S, Sinha PK, Mahajan R, et al.: Risk factors for visceral leishmaniasis relapse in immunocompetent patients following treatment with 20 mg/kg liposomal amphotericin B (Ambisome) in Bihar, India. PLoS Negl Trop Dis. 
2014; 8(1): e2536, Published online 2014 Jan 2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Ruiter CM, van der Veer C, Leeflang MM, et al.: Molecular tools for diagnosis of visceral leishmaniasis: systematic review and meta-analysis of diagnostic test accuracy. J Clin Microbiol. 2014; 52(9): 3147–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlvar J, Aparicio P, Aseffa A, et al.: The relationship between leishmaniasis and AIDS: the second 10 years. Clin Microbiol Rev. 2008; 21(2): 334–59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSingh S, Sivakumar R: Recent advances in the diagnosis of leishmaniasis. J Postgrad Med. 2003; 49(1): 55–60. PubMed Abstract | Publisher Full Text\n\nGao CH, Yang YT, Shi F, et al.: Development of an Immunochromatographic Test for Diagnosis of Visceral Leishmaniasis Based on Detection of a Circulating Antigen. PLoS Negl Trop Dis. 2015; 9(6): e0003902. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Colmenares M, Portus M, Riera C, et al.: Short report: detection of 72-75-kD and 123-kD fractions of Leishmania antigen in urine of patients with visceral leishmaniasis. Am J Trop Med Hyg. 1995; 52(5): 427–8. PubMed Abstract | Publisher Full Text\n\nIslam MZ, Itoh M, Shamsuzzaman SM, et al.: Diagnosis of visceral leishmaniasis by enzyme-linked immunosorbent assay using urine samples. Clin Diagn Lab Immunol. 2002; 9(4): 789–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKohanteb J, Ardehali SM, Rezai HR: Detection of Leishmania donovani soluble antigen and antibody in the urine of visceral leishmaniasis patients. Trans R Soc Trop Med Hyg. 1987; 81(4): 578–80. PubMed Abstract | Publisher Full Text\n\nAttar ZJ, Chance ML, el-Safi S, et al.: Latex agglutination test for the detection of urinary antigens in visceral leishmaniasis. Acta Trop. 2001; 78(1): 11–6. 
PubMed Abstract | Publisher Full Text\n\nVallur AC, Tutterrow YL, Mohamath R, et al.: Development and comparative evaluation of two antigen detection tests for Visceral Leishmaniasis. BMC Infect Dis. 2015; 15(1): 384. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWitte EC, Lambers Heerspink HJ, de Zeeuw D, et al.: First morning voids are more reliable than spot urine samples to assess microalbuminuria. J Am Soc Nephrol. 2009; 20(2): 436–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRabinovitch A, Sarewitz SJ, Woodcock SM, et al.: Urinalysis and collection, transportation, and preservation of urine specimens: approved guideline. Urin Collect Transp Preserv Urin specimens Approv Guidel. 2001.\n\nHossain F, Ghosh P, Khan MAA, et al.: Real-time PCR in detection and quantitation of Leishmania donovani for the diagnosis of Visceral Leishmaniasis patients and the monitoring of their response to treatment. PLoS One. 2017; 12(9): e0185606. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoelaert M, Verdonck K, Menten J, et al.: Rapid tests for the diagnosis of visceral leishmaniasis in patients with suspected disease. Cochrane Database Syst Rev. 2014; (6): CD009135. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGina P, Randall PJ, Muchinga TE, et al.: Early morning urine collection to improve urinary lateral flow LAM assay sensitivity in hospitalised patients with HIV-TB co-infection. BMC Infect Dis. 2017; 17(1): 339. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGhosh P, Bhaskar KR, Hossain F, et al.: Evaluation of diagnostic performance of rK28 ELISA using urine for diagnosis of visceral leishmaniasis. Parasit Vectors. 2016; 9(1): 383. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMondal D, Khan MG: Recent advances in post-kala-azar dermal leishmaniasis. Curr Opin Infect Dis. 2011; 24(5): 418–22. 
PubMed Abstract | Publisher Full Text\n\nGhosh P, Cruz I, Picado A, et al.: Dataset 1 in: Investigating the dynamics of Leishmania antigen in the urine of patients with visceral leishmaniasis: a pilot study. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16181.d217632"
}
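The Methods above estimate each sample's antigen level by fitting a four-parameter logistic (4PL) standard curve to the kit calibrators on every plate and reading unknown samples off that curve. As an illustrative sketch only (this is not the authors' or Kalon's code; the calibrator values, parameter guesses, and function names here are hypothetical), the calculation could look like this in Python with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # 4PL model: a = response at zero concentration, d = response at
    # saturation, c = inflection point, b = slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

def fit_standard_curve(conc, od):
    # Fit the 4PL model to calibrator concentrations vs. optical density.
    p0 = [od.min(), 1.0, float(np.median(conc)), od.max()]
    params, _ = curve_fit(four_pl, conc, od, p0=p0, bounds=(1e-9, np.inf))
    return params

def od_to_concentration(od, params):
    # Invert the fitted curve to estimate antigen concentration from OD.
    a, b, c, d = params
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)
```

Inverting the fitted curve is only meaningful for ODs strictly between the fitted asymptotes `a` and `d`; in practice, readings outside the calibrator range would be flagged rather than extrapolated.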
|
[
{
"id": "38601",
"date": "01 Oct 2018",
"name": "Florian Vogt",
"expertise": [
"Reviewer Expertise Research methodology",
"epidemiological and clinical. Infectious diseases outbreak research"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSummary\nThis is a small diagnostic study of a Leishmania antigen ELISA from Bangladesh that explores the variation of antigen load in urine at three different time points over the course of a day.\nGeneral comment:\nThe overall research question and rationale is of limited scope but sufficiently relevant for publication. However, there are a number of substantial issues that need to be addressed in order to make this research and its reporting scientifically sound.\nSpecific comments:\nWhile the authors justify why primary and relapse cases are compared separately, this is not done for age and sex. Unless there are specific reasons to expect clinically relevant differences in outcomes between age and sex for this particular analysis, no formal statistical comparisons with p-values should be done. The authors should either provide a rationale for this subgroup analysis or stick to descriptive comparisons for age and sex. The authors state that “urine collected in the early morning contains urinary components at the highest concentration and is more reliable for quantification of urine markers”. Please refer to the biological rationale behind this phenomenon, and in which other markers this has been observed. If markers in morning urine are considered to be higher than during the day because of urine retention overnight, the factor “time since last urination” becomes important for the 12pm and 4pm measurements in this study. Was this captured? 
Please add this information if available. In general, the literature search should be updated and more systematic to include the most important relevant recent studies. This is not the case. E.g. four of the five references the authors use to substantiate their statement that “Several studies have demonstrated Leishmania antigens in the urine of VL patients using different approaches, such as countercurrent immunoelectrophoresis, Western blot, latex agglutination test and ELISA” are from 1987, 1995, 2001, and 2002. However, at least two relevant VL antigenuria diagnostic studies have come out in 2018 alone. The sample size of 16 is very small and seems to be a convenience sample. Why was no systematic approach used to ensure representativeness of the sample to the target population? This weakness should be discussed and the study patient characteristics should be compared to the characteristics of the primary/relapse VL patients in the hospital. What was the rationale for including exactly 16 patients? Normally sample size should be prespecified based on estimated parameters. Please justify your sample size decision. HIV status is an important factor associated with antigen expression in urine. Please add this information from your patients if available, and if not please add HIV prevalence estimates from the target population. Time since symptom onset is also very important. Please add this data to Table 1 if available. If this information is not available, please discuss this as a limitation. For relapse patients, how was it established that patients actually had VL? Merely by oral patient history, or through patient treatment charts? If only by asking the patient, please discuss this as a limitation. Table 1 should be improved. Children and adults seem to be mutually exclusive categories, hence one of them can be removed without loss of information. All clinical and lab variables should be clearly defined in the table footnotes. 
Patients with primary and relapse VL episodes should be presented in separate columns. Figure 1 should be improved. A box-whisker plot should be used. It is not clear what the asterisks and horizontal lines refer to. There might be a danger of accidentally de-anonymising your study patients with the data included in Dataset 1. Better remove the following variables: Date of collection, Upazila, District. You refer to Table 1 in the Methods section. However, patient characteristics are part of the results and should be referred to in that section, not earlier. The data of Table 2 should be merged with Dataset 1 to facilitate replication of your analyses. The abstract should include the number of patients included and the time when the study was conducted as well as the place where the study took place. Interquartile ranges should be added to the median point estimates in the results section. Table 3 should show which statistical test was used for each column. Also show the absolute number of patients in the different strata and columns. The discussion needs substantial reworking:\n\n17.1. The first paragraph of the discussion section should be a succinct summary of the main findings, instead of referring to a test that was not used in this study (KAtex). 17.2. It is not clear why such prominent reference is made to the InBios Antigen Detect ELISA throughout the discussion. It played no role in this particular study. If that test was evaluated in parallel to the presented Kalon ELISA on the same patients, this should be reported together in the same manuscript. Please explain or change. 17.3. Too little reference is made to the existing body of literature. The authors should compare their concrete results to findings from other studies, and discuss how and why their findings might differ. A proper interpretation of the study findings should be presented as well. Keep elaborations about things that are unrelated to your study at a minimum. 
Instead, the concrete implications of the study findings should be elaborated in more detail. 17.4. Methodological strengths and in particular the weaknesses of the study should be discussed. This part is entirely absent for now. 17.5. Since the sample size was extremely small and probably not representative of the VL patients at that hospital, any conclusions should by default be very cautious. No conclusive findings can be drawn from this study, and in particular no diagnostic decisions should be taken based on this study. The need for future, properly designed and sufficiently powered studies should be highlighted instead of suggesting that the research question is settled after having conducted this study. Currently the conclusions are too firm and too broad, given the limitations of this research.",
"responses": []
},
{
"id": "39750",
"date": "07 Dec 2018",
"name": "Farhat Afrin",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors have evaluated the leishmanial antigens in urine (as a non-invasive means) in primary and relapse VL as well as in cases of treatment failure, at different time points. Though the pilot study is interesting, it is very preliminary. There are few concerns regarding the manuscript.\n\nThe sample size is too low. What is the sensitivity and specificity of the assay? Is it better than others reported so far? The authors may assay the PKDL samples as well. The authors mention the advantage of antigen detection in the urine of patients as in Leishmania/HIV co-infection. They need to complement their pilot study with few such co-infected cases. The authors also state the importance of antigen detection in urine (that reflects parasite load in the patient) over serum antibody response, as the later are detectable up to several years after cure. The authors need to validate this by including urine from cured patients in their assay.\nMinor concerns: The English language needs to be improved. For instance’ Although molecular methods, such as PCR, have shown to be effective in VL diagnosis.’",
"responses": []
},
{
"id": "41461",
"date": "27 Dec 2018",
"name": "Nahid Ali",
"expertise": [
"Reviewer Expertise Leishmania biology",
"diagnosis",
"immunology"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this study the authors have shown the use of urine in diagnostic ELISA at different time points and concluded that collection of urine in the morning gives maximum antigen concentration in Leishmania infected samples. However, the authors have not shown the ultimate advantage of their findings. There are considerable concerns necessary to be addressed before publication. Below are my point wise comments for the paper.\nThe statistical difference between three different time points is not very high. If all the three time points are distinguishing active VL from controls then what is the advantage of selecting morning samples. Authors should include the reference for this. The sample size in this study is very low. Authors should justify the selection of 16 samples. Throughout in the manuscript the authors have stated the use of the test with VL and PKDL. However they have not studied PKDL samples. Selection of only VL subjects is not sufficient to explain any parameters related to the diagnosis. Therefore study with cured samples, and healthy and other diseases controls are also important. The papers published in the last two years related to the development of urine based diagnostic assays in VL can be counted on the fingertips. Why the authors do not want to give credit to those papers? 
In the Introduction and Discussion sections there is a serious absence of rationale related to the study, such as which papers are available on sampling time-point studies in other infectious diseases and what their results were. Overall, the outcome of this study, based on very few and selectively chosen samples, is over-conclusive.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1514
|
https://f1000research.com/articles/7-1504/v1
|
20 Sep 18
|
{
"type": "Research Article",
"title": "Computational genome-wide identification of heat shock protein genes in the bovine genome",
"authors": [
"Oyeyemi O. Ajayi",
"Sunday O. Peters",
"Marcos De Donato",
"Sunday O. Sowande",
"Fidalis D.N. Mujibi",
"Olanrewaju B. Morenikeji",
"Bolaji N. Thomas",
"Matthew A. Adeleke",
"Ikhide G. Imumorin",
"Oyeyemi O. Ajayi",
"Sunday O. Peters",
"Marcos De Donato",
"Sunday O. Sowande",
"Fidalis D.N. Mujibi",
"Olanrewaju B. Morenikeji",
"Matthew A. Adeleke"
],
"abstract": "Background: Heat shock proteins (HSPs) are molecular chaperones known to bind and sequester client proteins under stress. Methods: To identify and better understand some of these proteins, we carried out a computational genome-wide survey of the bovine genome. For this, HSP sequences from each subfamily (sHSP, HSP40, HSP70 and HSP90) were used to search the Pfam (Protein family) database, for identifying exact HSP domain sequences based on the hidden Markov model. ProtParam tool was used to compute potential physico-chemical parameters detectable from a protein sequence. Evolutionary trace (ET) method was used to extract evolutionarily functional residues of a homologous protein family. Results: We computationally identified 67 genes made up of 10, 43, 10 and 4 genes belonging to small HSP, HSP40, HSP70 and HSP90 families respectively. These genes were widely dispersed across the bovine genome, except in chromosomes 24, 26 and 27, which lack bovine HSP genes. We found an uncharacterized outer dense fiber (ODF1) gene in cattle with an intact alpha crystallin domain, like other small HSPs. Physico-chemical characteristic of aliphatic index was higher in HSP70 and HSP90 gene families, compared to small HSP and HSP40. Grand average hydropathy showed that small HSP (sHSP), HSP40, HSP70 and HSP90 genes had negative values except for DNAJC22, a member of HSP40 gene family. The uniqueness of DNAJA3 and DNAJB13 among HSP40 members, based on multiple sequence alignment, evolutionary trace analysis and sequence identity dendrograms, suggests evolutionary distinct structural and functional features, with unique roles in substrate recognition and chaperone functions. The monophyletic pattern of the sequence identity dendrograms of cattle, human and mouse HSP sequences suggests functional similarities. 
Conclusions: Our computational results demonstrate the first-pass in-silico identification of heat shock proteins and call for further investigation to better understand their functional roles and mechanisms in Bovidae.",
"keywords": [
"Cattle",
"bovine genome",
"heat shock proteins",
"Hsp genes",
"molecular chaperones"
],
"content": "Introduction\n\nMost newly synthesized proteins require the interplay of evolutionarily conserved protein co-factors known as molecular chaperones, activated in response to heat stress or other chemical stressors that impair cellular activity. Organisms respond to environmental stress through reprogramming leading to the production of heat shock proteins (HSPs) (Hartl & Hayer-Hartl, 2002). Effects of HSP production include maintaining cellular protein homeostasis and guiding against cellular dysfunction, increased responsiveness to stress insults, microfilament stabilization, etc. (Kregel, 2002). In addition, HSP70s and HSP40s synergistically suppress the formation of toxic proteins that drive neurodegeneration (Meriin et al., 2002).\n\nHSPs are classified into six main families (small HSPs, HSP40, HSP60, HSP70, HSP90 and HSP110), based on molecular mass (Johnson et al., 2003; Kappe et al., 2002). In addition, individual families also have subfamily differentiations, all contributing to specific functions in eukaryotes (Cheetham & Caplan, 1998). HSP40 family is grouped into three subtypes, based on the extent of domain conservation when compared to the Escherichia coli gene dnaJ (Cheetham & Caplan, 1998). HSP70s are highly conserved across many phyla, with distinctive N- and C-terminal domains interacting in an allosteric fashion (Craig et al., 2006). HSP90 genes (inducible HSP-α, and constitutive HSP-β) (Csermely et al., 1998; Hoffman & Hovemann, 1988), although located in the nucleus, express their protein function in the cytosol, endoplasmic reticulum, chloroplast and mitochondria (Emelyanov, 2002; Krishna & Gloor, 2001; Stechmann & Cavalier-Smith, 2004). The first step towards a better understanding of bovine HSPs require knowledge of the actual number of HSP genes in cattle. In this study, we identified the number, chromosomal locations and the physico-chemical properties of 67 HSP genes in the bovine genome. 
Evolutionarily conserved and class specific residues were inferred using evolutionary trace (ET) analysis and sequence identity dendrograms were constructed using human, mouse and cattle HSP sequences to infer functional similarity among these three species. Our results contribute to the biology of HSPs in cattle, possible application in animal breeding and further clarifies intercontinental bovine adaptation mechanisms.\n\n\nMethods\n\nWe identified all putative heat shock protein genes at the genome-wide level in cattle using published human and mouse sequences as queries. Due to the variation in HSP gene family sequences, we used three representative HSP sequences from each subfamily (sHSP, HSP40, HSP70 and HSP90) to search the Pfam (Protein family) database, for identifying exact HSP domain sequences based on the hidden Markov model (HMM) (Finn et al., 2010). The 12 query sequences are as follows: human sHSPs; NP_001531.1, NP_1499971.1 and mouse Hsp; NP_034094.1; human HSP40; NP_001530.1, NP699161.1 and mouse HSP40: NP033610.1; human HSP70 NP_005336, NP_002146.2 and mouse Hsp70 NP_084477.1; human HSP90; NP_001017963 and NP_003290.1 and mouse NP_032328.2. Pfam domain PF00011.16 (Hsp20 domain), PF00226.26 (DNA J domain), PF00012.15 (Hsp70 domain) and PF00183.13 (Hsp90 domain) were used to carry out a protein-protein BLAST search (p value = 0.001) of the non-redundant protein sequences in Bos taurus using the BLOSUM62 matrix, with an expected threshold of 10, a word size of 6, a gap cost of 11 with an extension 1, and with a conditional compositional score matrix adjustment. We acquired the starting chromosomal locations of candidate HSP genes searching through TBLASTN (p value = 0.001) in the non-redundant protein sequences for Bos taurus using the BLOSUM62 matrix, with an expected threshold of 10, a word size of 6, a gap cost of 11 with an extension 1, and with a conditional compositional score matrix adjustment and filtering the low complexity regions. 
BLAT searches of the UCSC database were used to confirm the chromosomal locations of the HSP sequences. Redundant sequences similarly located chromosomally were rejected. Candidate sequences fitting our criteria were analyzed in the Pfam database and detected using the SMART program (version 6), as described (Letunic et al., 2009).\n\nWe utilized the ProtParam tool (Swiss Institute of Bioinformatics) to compute potential physico-chemical parameters detectable from a protein sequence (molecular weight, theoretical isoelectric point (pI), amino acid composition, estimated half-life, aliphatic index, grand average of hydropathicity (GRAVY)) etc., as described (Gasteiger et al., 2003). All HSP sequences were submitted to this tool for physico-chemical characterization.\n\nMultiple alignments of HSP protein sequences from human, mouse and cattle were performed using ClustalW (version 2). Conserved regions in the alignment were shaded black and less conserved regions were shaded gray (Gasteiger et al., 2003).\n\nTo extract evolutionarily functional residues of a homologous protein family, we utilized the evolutionary trace (ET) method, as previously described. 
Multiple sequence alignments obtained from ClustalW for sHSP, HSP40 (type I and II), HSP70 and HSP90 were submitted to the ET analysis Web server (Lichtarge et al., 1996), using input Protein Data Bank (PDB) files 2WJ7 (human alphaB crystallin), 1HDJ (NMR solution structure of the HDJ-1 J-domain), 1YUW (crystal structure of bovine hsc70 (aa1-554) E213A/D214A mutants) and 3Q6N (hexameric structures of human HSP90) obtained from PDB as trace-to-structure mapping for sHSP, HSP40, HSP70 and HSP90, respectively.\n\n\nResults\n\nOur exhaustive search for HSP genes in the bovine genome using human and mouse sequences as queries resulted in the identification of 10 genes belonging to small HSPs (Table 1), 43 genes belonging to the HSP40 gene family (Table 2), 10 genes belonging to the HSP70 gene family (Table 3) and 4 genes belonging to the HSP90 family (Table 4). We classified a gene as belonging to the sHSP gene family if it contained one or more intact alpha-crystallin domains. In addition to the list of sHSP genes (Table 1), outer dense fiber protein 1 (ODF1) was identified in the genome-wide search due to the presence of an intact alpha-crystallin domain, as confirmed by Pfam and SMART tools. 
sHSPs were scattered across the chromosomes, with two genes (HSPB2 and CRYAB) found on chromosome 15 observed to be approximately 5 kb apart, possibly indicative of a tandem duplication.\n\naa, amino acids; MW, molecular weight; pI, isoelectric point; AI, aliphatic index; II, instability index; GRAVY, grand average of hydropathicity index.\n\nThe 43 genes belonging to the HSP40 gene family were sub-classified depending on domain conservation into types I, II and III. A total of four HSP40 genes (DNAJA1, DNAJA2, DNAJA3 and DNAJA4) (Table 2) possessed the characteristic four canonical domains: J, glycine-phenylalanine (G/F)-rich region, 2 zinc-finger-like motifs, and the carboxyl-terminal domain (CTD), first observed in E. coli. In total, 12 HSP40 genes were observed to belong to HSP40 type II (lack zinc-finger-like motifs), and 27 HSP40 genes were assigned to HSP40 type III due to the presence of a single J domain. Although 22 out of 29 chromosomes contained the hsp40 gene family, with the highest number of HSP40 genes found on chromosome 12 (Table 2), no HSP40 gene was found on chromosomes 14, 17, 20, 23, 26 and 27.\n\nA total of 10 HSP70 genes along with their chromosomal positions are presented in Table 3. HSP 70-1a, hsp70-1b and hsp70-1L were mapped to chromosome 23 with less than 1 kb between these genes. However, HSP 70-1a and hsp70-1b were mapped to the same region due to high sequence identity (99%) between these two genes. 
Similarly, four HSP90 genes in cattle were assigned to the hsp90 gene family; HSP90AA1, HSP90AB1, HSP90B1 and TRAP1 were mapped to chromosomes 21, 23, 5 and 25, respectively (Table 4). Jointly considering the chromosomal locations of all the HSP genes in this study, chromosomes 24, 26 and 27 completely lack HSP genes in the bovine genome.\n\nThe physico-chemical parameters indicated that in sHSPs, pI ranged from 5.07 to 8.40, with most members of the sHSP being acidic except for HSPB9 and ODF-1, which are basic in nature (Table 1). pI for HSP40 proteins in the gene family ranged from 4.61 in DNAJC24 to 10.65 in DNAJC4, with others nestled between these two extremes (Table 2). However, when jointly considered, 20 HSP40 proteins were acidic while 23 were basic. In the HSP70 family, all members were acidic in nature, with values ranging from 5.07 (HSPA5) to 5.97 (HSPA9) (Table 3). Similar results were obtained for members of the HSP90 family, with HSP90B1 having the lowest pI (4.76) while TRAP1 had the highest value (6.66) (Table 4).\n\nResults obtained for the instability index revealed that all sHSP and HSP90 family members were unstable (II>40), with the majority of the HSP40 and HSP70 families being very stable (II<40). Comparatively, values obtained for the aliphatic index were high among sHSP, HSP40, HSP70 and HSP90 families, with higher values more pronounced in the HSP70 and HSP90 families. Similarly, results obtained from GRAVY showed sHSPs, HSP40, HSP70 and HSP90 proteins with negative values, except for DNAJC22 (Table 2), which was observed to be positive.\n\nAlignment results for the sHSP gene family in cattle showed that the alpha crystallin domain is much more highly conserved than the N- and C-terminal regions. Multiple sequence alignment identified phenylalanine (F), proline (P), glycine (G), leucine (L), glycine (G) and L to be evolutionarily conserved (Figure 1). 
Interestingly, all these residues were found to reside in the alpha crystallin domain, which may indicate their functional importance. Type I and II HSP40 gene family protein sequences and the NMR solution structure of human HSP40 (1hdj), containing only the J domain, were aligned. Amino acid residues tyrosine (Y), L (present in helix 1); lysine (K), alanine (A), A (present in helix 2); H, P (present in the HPD loop); F, A, Y, L (present in helix 3); serine (S), arginine (R), aspartic acid (D) (present in helix 4), and G were observed to be evolutionarily conserved among type I and II members; more importantly, all these conserved residues were localized in the J domain of Hsp40 genes. The sequence motif CxxCxGxG, a characteristic of the zinc finger domain, was only observed in the HSP40 type I gene family (Figure 2).\n\nThe evolutionarily conserved amino acid residues are shown as the consensus at the bottom. The sites with amino acids that have the same biochemical characteristics are shown as colored boxes. Bta, Bos taurus.\n\nThe J domain is the most conserved (top region). The 4 conserved CR-type zinc fingers, characterized by the CxxCxGxG motifs, and the glycine/phenylalanine region are also shown. The evolutionarily conserved amino acid residues are shown as the consensus at the bottom. The sites with amino acids having the same biochemical characteristics are shown as colored boxes. Bta represents Bos taurus, and Hsa Homo sapiens.\n\nTo examine the structural context of the invariant residues in the sHSP gene family, the human alpha B crystallin (2WJ7) was used as our reference structure. sHSP sequences of human, mouse and cattle utilized for evolutionary trace analysis were partitioned into four groups. 
Multiple sequence alignment of the consensus sequences obtained from conserved residues in each group resulted in a trace, which identified three evolutionarily functional residues (F, P, P) and five residues which appeared to be class-specific (Figure 3). Amino acid residues isoleucine (I) and valine (V) were class-specific to group 1 (ODF1) and group 4 (HSPB9), respectively, while F was specific to groups 2 and 3. In addition, while the second, third and fourth class-specific residues Y, S and V were observed in group 1, residues L and G were observed to be peculiar to groups 2, 3 and 4 respectively. Interestingly, most of these evolutionarily conserved and class-specific residues were found in the alpha-crystallin domain, apart from a class-specific residue found in the N-terminal region (indicated by an arrow in Figure 3).\n\nEvery member of sHSPs was partitioned into four different groups based on the degree of conservation. Group 1 consists of human, mouse and cattle ODF1 protein. Group 2 consists of HSPB3, HSPB2, CRYAA, CRYAB, HSPB6, HSPB1 and HSPB8; Group 3 consists of HSPB7 while Group 4 consists of HSPB9. Conserved residues were colored red while class-specific residues were colored green. A class-specific residue was observed in the N-terminal domain conferring group specificity to chaperoning functions (indicated by an arrow). Amino acid residues marked with a triangle and star in the figure above have been identified elsewhere (Laksanalamai & Robb, 2004). The presence of an extra 21 amino acid residues in the ODF1 gene, which is lacking in other sHSP members, is highlighted in magenta coloration. 
The sign “…..” represents the presence of an invariant residue in each group while the sign “----” represents the absence of a residue at that position.\n\nThe sequence identity dendrogram of sHSPs suggests a monophyletic arrangement with ODF1 diverging first from other members of sHSPs, while HSPB2, CRYAA, CRYAB and HSPB6 appeared to have recently diverged, with CRYAA and CRYAB branching from the same node (Figure 4). In the HSP40 gene family, the multiple alignment of consensus sequences was partitioned into seven groups using PDB files 1HDJ and 3AGX as reference structures; this resulted in a trace that identified residues tyrosine (Y), L, A, A, histidine (H), P, F, A, Y, L, R, aspartic acid (D) and G, as evolutionarily conserved residues with some class-specific residues nestled within the J domain (Figure 5). The sequence identity dendrogram suggests a monophyletic pattern with DNAJB14 diverging first, followed by DNAJA3 (Figure 6). Using reference structures (2QW9, 1YUW) for HSP70 genes and (3Q6N, 4AWO) for HSP90 genes, evolutionary trace analysis predicted a large number of amino acid residues to be evolutionarily conserved (data not shown). The sequence identity dendrograms assumed a monophyletic pattern with HSPA4 and MT1 diverging first from other members of the HSP70 and HSP90 gene families, respectively (Figure 7, Figure 8).\n\nThe first three letters Hsa, Bta and Mmu correspond to human, bovine and mouse, respectively, followed by the gene names. The numbers represent the sequence numbers used in the evolutionary trace analysis.\n\nEvery member of the HSP40 gene family was partitioned into seven different groups based on the degree of conservation. All groups consisted of human, mouse and cattle sequences. Group 1 consists of DNAJB2, DNAJB3, DNAJB6, DNAJB7 and DNAJB8. 
Group 2 consists of only DNAJB9; Group 3 consists of DNAJA1, DNAJA2, and DNAJA4; Group 4 consists of DNAJB11; Group 5 consists of DNAJB1, DNAJB4, DNAJB5 and DNAJB13; Group 6 consists of only DNAJA3 while Group 7 consists of DNAJB12 and DNAJB14. Conserved residues were colored red and they were all localized in the J domain while class-specific residues were denoted by the sign “X”. The symbol “….” represents the presence of an invariant residue in each group while the symbol “----” represents the absence of a residue at that position.\n\nThe first three letters Hsa, Bta and Mmu correspond to human, bovine and mouse, respectively. The numbers represent the sequence numbers used in the evolutionary trace analysis.\n\nThe first three letters Hsa, Bta and Mmu correspond to human, bovine and mouse, respectively. The numbers represent the sequence numbers used in the evolutionary trace analysis.\n\nThe first three letters Hsa, Bta and Mmu correspond to human, bovine and mouse, respectively. The numbers represent the sequence numbers used in the evolutionary trace analysis.\n\n\nDiscussion\n\nThe highly-conserved heat stress genes are instrumental to maintaining protein homeostasis and coordinating cellular stress responses (Keller et al., 2008). Our analysis revealed a total of 67 genes (10 sHSP, 43 HSP40, 10 HSP70 and 4 HSP90), which are believed to have arisen through gene duplication, an event characteristic of many gene families. sHSPs are functionally known to confer protection against a variety of cellular stressors (Latchman, 2002) and are notably involved in cytoskeletal rearrangements (Quinlan, 2002) and apoptosis (Arrigo et al., 2002). A search of all bovine genes that code for alpha crystallin-related sHSPs identified 10 sHSP-like proteins in the cattle genome, including a previously uncharacterized ODF1, newly observed in cattle but already reported in humans (Kappe et al., 2003), which could possibly be present in other species. 
In this study, the distribution of sHSP genes in bovines was observed to be dispersed over nine chromosomes (Table 1), and similar results have been reported in humans (Kappe et al., 2003), suggesting the conservation of sHSP genes in the two species’ common ancestor.\n\nComputational analysis assessing the physico-chemical properties of proteins in the gene families is crucial to understanding the functions of the proteins encoded by these genes in vitro. In this study, the pI was observed to be acidic for most of the cattle sHSPs except for HSPB9 and ODF1, which were basic. These observations might be indicative of functional differences of HSPB9 and ODF1 compared to other members, as similar findings have been suggested to indicate different roles (Korber et al., 2000). In any case, the in vivo functional assessment of these sHSPs in cattle is necessary before making valid conclusions.\n\nIn the sHSP gene family, the aliphatic index was high (> 65), implying that cattle sHSP genes possess thermal stability, a feature consistent with their protective function in preventing cellular damage during heat stress (Collier et al., 2006). Among sHSP genes, GRAVY results suggested that the proteins encoded by these genes are hydrophilic, which may enhance their functional ability to oligomerize and their subsequent binding abilities to different proteins (Lanneau et al., 2008; Roy et al., 2011).\n\nMultiple sequence alignment (MSA) of homologous sequences offers a wealth of information by identifying conserved residues crucial to the function or structure of related proteins (Capra & Singh, 2007). Evolutionary trace analysis not only helps in the identification of evolutionarily conserved residues but also putatively identifies functionally important internal and external residues, potentially contributing to cell integrity and enzymatic activity respectively (Lichtarge et al., 1996). 
Among cattle sHSP genes, MSA identified most of the evolutionarily conserved residues in the alpha crystallin domain, while the amino-terminal and carboxyl-terminal sequences were deficient in invariant residues. Conservation of the structural architecture of sHSPs in several species has been demonstrated (Caspers et al., 1995; Kim et al., 1998), with Haslbeck et al. (2005) reporting that the N and C termini, though variable in sequence and length, are essential in preventing the misfolding of proteins, an observation neatly validated by our findings.\n\nET analysis was also utilized in the identification of invariant and class-specific residues, and the results suggested that a single class-specific residue was observed in the N-terminal region, while four were observed in the alpha crystallin domain when homologous sequences of human, mouse and cattle were included in the ET analysis (Figure 6). In the alpha crystallin domain, and as revealed by ET results, the exclusion of the distant member ODF1 gene identified LxxxGxL as one of the conserved motifs shared among human, mouse and cattle sHSP sequences, while residues YxxxSxV are class-specific to the ODF1 gene. In addition, downstream of the YxxxSxV motif was the presence of approximately 21 residues present only in the ODF1 gene, and this might suggest structural and functional differences between the ODF1 gene and other sHSPs in cattle. Interestingly, the sequence identity dendrogram also showed the separation of the ODF1 gene prior to the divergence of other members of sHSP, thus further strengthening our previous hypothesis of a possible existence of functional and structural differences between ODF1 and other members of the sHSP family. 
In a related study, several authors reported AxxxxGxL as the most conserved motif in the alpha crystallin domain (Caspers et al., 1995; Narberhaus, 2002); however, in the MSA of cattle sHSP sequences, it appears that the AxxxxGxL motif has been replaced with the LxxxGxL motif. One plausible reason could be that the A residue in cattle may not be essential to cattle sHSP chaperoning or substrate recognition functions. The two-residue region (LP) observed in extremophiles (Laksanalamai & Robb, 2004) was also identified in our study upstream of the LxxxGxL motif, although residue P was identified to be much more highly conserved than residue L (Figure 6). That said, the in vivo roles of these residues remain to be verified, and one useful approach is to carry out an in-vivo site-directed mutagenic study.\n\nThe HSP40 gene family is a large family that is structurally classified into 3 subtypes (Cheetham & Caplan, 1998) and functionally characterized based on their role as co-chaperones in binding and regulating the activity of HSP70s (Jiang et al., 1997). A total of 43 putative HSP40 members were identified in this study and they were scattered across the genome, although 41 J-domain-containing proteins were reported in humans (Qiu et al., 2006). The large number of genes identified in the HSP40 family could be attributed to its functional mediatory role in stabilizing the interaction between HSP70 and a myriad of substrates (Muchowski & Wacker, 2005) in different cellular components to meet cellular goals. GRAVY results of HSP40 suggest hydrophilic tendencies, except for DNAJC22, which appears to be hydrophobic. 
In addition, while some members appear to be acidic based on their pI values, others possess basic properties, suggesting functional differences that could inform wet-lab experiments.\n\nDNAJ/HSP40 family members contain the J domain, which facilitates binding to HSP70s, although other domains critical to their functions have also been identified (Kota et al., 2009). MSA results identified evolutionarily conserved residues that are plausibly significant to the overall activity of the J domain or the preservation of its structural integrity. The identification of evolutionarily conserved residues only in the J domain is consistent with the conserved nature of the J domain in comparison to other domains. The sequence alignment of the HSP40 type I and II homologs in cattle predicted the presence of cysteine repeats, which were observed only in DNAJA1, DNAJA2, DNAJA3 and DNAJA4 (HSP40 type I) sequences. This finding, as observed in humans, is consistent with the hypothesis that the presence of the cysteine repeats structurally distinguishes HSP40 type I from HSP40 type II and type III (Shi et al., 2005). The HPD motif present between helices 2 and 3, given its high degree of conservation, is reported to mediate the interaction between HSP40 and HSP70 (Greene et al., 1998).\n\nIn DNAJB13, the HPD motif was observed to have been replaced with an HPL motif; this tripeptide motif was present in human, mouse and cattle DNAJB13, suggesting that the mutation occurred before the divergence of these three species. Given the reported loss of DnaJ function due to the H33Q mutation (Cajo et al., 2006) and of Sec63p due to the P156N and D157A mutations (Feldheim et al., 1992), it is unclear whether the motif change from HPD to HPL in DNAJB13 still enables this gene to stimulate HSP70 ATPase activity. 
That said, extragenic suppressor analysis showed that a DnaJ D35N mutation in the HPD motif causes defective growth, an anomaly that was alleviated by spontaneous mutations of DnaK (the Hsp70 of E. coli) at R167 (Suh et al., 1998).\n\nET analysis involving HSP40 type I and II genes in humans, mice and cattle identified evolutionarily conserved residues consistent with our observations from multiple sequence alignment. Although some class-specific residues were observed, the invariant residues found in the J domain among orthologous sequences could be suggestive of their functional importance in regulating HSP70 ATPase activity or in ensuring protein stability in vivo, including tethering DnaK (a member of the HSP70 family) to DnaJ (HSP40)-bound substrates (Greene et al., 1998). An interesting observation came from the sequence identity dendrogram of the putative HSP40 type I and II sequences of humans, mice and cattle. One would have expected the members of HSP40 type I (DNAJA1, DNAJA2, DNAJA3, DNAJA4) to cluster together because of the presence of the four canonical domains; however, DNAJA3 diverged earlier than expected compared with the other members, which diverged more recently. In addition, DNAJA3 did not cluster with any other HSP40 sequence and diverged from the tree after the divergence of the DNAJB14 and DNAJB12 clade. Although the reason for this observation remains unknown, it could be indicative of functional divergence of DNAJA3. It therefore appears that DNAJA3 performs its chaperoning function in a manner different from that of other HSP40 type I and II family members, given the shift of the pI of DNAJA3 toward basic values when compared to other HSP40 type I members. 
In any case, more research is needed to verify these speculations functionally.\n\nA total of 10 members of HSP70 were observed in cattle, with the invariant residues mostly found in the nucleotide-binding domain where HSP70 interacts with HSP40 J domains. Interestingly, the isoelectric points (pI) of all the HSP70 protein sequences were predicted to be acidic, with very little variation among them. This could suggest a functional similarity among the cattle HSP70s, further supported by reports of conserved functional properties of HSP70 proteins across species (Angelidis et al., 1996; Li et al., 1991). All bovine HSP70 protein sequences appear to be hydrophilic based on their GRAVY values, and the high predicted aliphatic index values suggest thermal stability, consistent with the chaperoning role of HSP70 in protecting against stresses that cause protein denaturation (Bukau et al., 2006). In the sequence identity dendrogram, HSPA4 diverged first, followed by HSPA14, while the instability index predicted both HSPA4 (45.24) and HSPA14 (44.95) to be unstable (II>40), presumably an indication of functional similarities between HSPA4 and HSPA14; that said, more experiments are needed to gain mechanistic insights before valid inferences can be made.\n\nHSP90 is an abundant and highly conserved molecule whose members (the constitutive forms HSP90AA1, HSP90AB1 and HSP90B1, and the mitochondrial TRAP1) possess acidic properties. High aliphatic index values were also recorded, with the highest occurring in TRAP1, indicative of greater thermal stability compared to the constitutive forms. Similarly, the in vitro stability of HSP90 predicted by the instability index and the hydrophilic properties inferred from GRAVY results are useful information that could be utilized in wet-lab experiments. 
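The aliphatic index used above as a thermostability proxy has a simple closed form (Ikai, 1980), which ProtParam also reports: AI = X(Ala) + 2.9·X(Val) + 3.9·(X(Ile) + X(Leu)), where X(...) are mole percentages of each residue. The sketch below is illustrative only; the sequences are toys, not bovine HSP sequences, and the instability index (which requires a full dipeptide weight table) is deliberately omitted.

```python
# Minimal sketch of the aliphatic index (Ikai, 1980), as reported by ProtParam:
# AI = X(Ala) + a*X(Val) + b*(X(Ile) + X(Leu)), with a = 2.9 and b = 3.9
# weighting the relative volumes of the aliphatic side chains.
def aliphatic_index(seq: str) -> float:
    seq = seq.upper()
    n = len(seq)
    x = {aa: 100.0 * seq.count(aa) / n for aa in "AVIL"}  # mole percentages
    return x['A'] + 2.9 * x['V'] + 3.9 * (x['I'] + x['L'])

# A poly-alanine toy sequence gives AI = 100; enriching Val/Ile/Leu raises it,
# which is why aliphatic-rich chaperones such as TRAP1 score as thermally stable.
print(aliphatic_index("AAAA"))   # 100.0
print(aliphatic_index("AVIL"))   # ~292.5
```

Ranking full-length sequences by this value reproduces the kind of comparison made here between TRAP1 and the constitutive HSP90 forms.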
The high level of sequence conservation revealed by MSA and ET analysis in both the N- and C-terminal regions suggests that these two regions play a crucial role in HSP90 chaperone functions. The sequence identity dendrogram revealed that TRAP1 diverged first, while the constitutive forms of HSP90 were grouped together in a single clade. This result is consistent with the fact that TRAP1 primarily functions in the mitochondria, while the other members, which make up the constitutive forms, function in the cytosol.\n\n\nData availability\n\nAll GenBank accession numbers of the bovine sequences used in this study are detailed in Table 1–Table 4.",
"appendix": "Grant information\n\nWe are thankful for financial support by the College of Agriculture and Life Sciences, Cornell University, Ithaca, NY and Zoetis, Inc. Additional support by National Research Initiative Competitive Grant Program (Grant No. 2006-35205-16864) from the USDA National Institute of Food and Agriculture; USDA-NIFA Research Agreements (Nos. 2009-65205-05635, 2010-34444-20729) and USDA Federal formula Hatch funds appropriated to the Cornell University Agricultural Experiment Station are gratefully acknowledged. OOA was supported by a Norman Borlaug Leadership Enhancement in Agriculture Program fellowship from the US Agency for International Development.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAngelidis CE, Nova C, Lazaridis I, et al.: Overexpression of hsp70 in transgenic mice results in increased cell thermotolerance. Transgenics. 1996; 2: 111–117.\n\nArrigo AP, Paul C, Ducasse C, et al.: Small stress proteins: novel negative modulators of apoptosis induced independently of reactive oxygen species. Prog Mol Subcell Biol. 2002; 28: 185–204.\n\nBukau B, Weissman J, Horwich A: Molecular chaperones and protein quality control. Cell. 2006; 125(3): 443–451.\n\nCajo GC, Horne BE, Kelley WL, et al.: The role of the DIF motif of the DnaJ (Hsp40) co-chaperone in the regulation of the DnaK (Hsp70) chaperone cycle. J Biol Chem. 2006; 281(18): 12436–44.\n\nCapra JA, Singh M: Predicting functionally important residues from sequence conservation. Bioinformatics. 2007; 23(15): 1875–1882.\n\nCaspers GJ, Leunissen JA, de Jong WW: The expanding small heat-shock protein family, and structure predictions of the conserved ‘‘alpha-crystallin domain’’. J Mol Evol. 1995; 40(3): 238–248. 
Cheetham ME, Caplan AJ: Structure, function and evolution of DnaJ: Conservation and adaptation of chaperone function. Cell Stress Chaperones. 1998; 3(1): 28–36.\n\nCollier RJ, Stiening CM, Pollard BC, et al.: Use of gene expression microarrays for evaluating environmental stress tolerance at the cellular level in cattle. J Anim Sci. 2006; 84(Suppl): E1–13.\n\nCraig EA, Huang P, Aron R, et al.: The diverse roles of J-proteins, the obligate Hsp70 co-chaperone. Rev Physiol Biochem Pharmacol. 2006; 156: 1–21.\n\nCsermely P, Schnaider T, Sőti C, et al.: The 90-kDa molecular chaperone family: structure, function, and clinical applications. A comprehensive review. Pharmacol Ther. 1998; 79(2): 129–168.\n\nEmelyanov VV: Phylogenetic relationships of organellar Hsp90 homologs reveal fundamental differences to organellar Hsp70 and Hsp60 evolution. Gene. 2002; 299(1–2): 125–133.\n\nFeldheim D, Rothblatt J, Schekman R: Topology and functional domains of Sec63p, an endoplasmic reticulum membrane protein required for secretory protein translocation. Mol Cell Biol. 1992; 12(7): 3288–3296.\n\nFinn RD, Mistry J, Tate J, et al.: The Pfam protein families database. Nucleic Acids Res. 2010; 38(Database Issue): D211–22.\n\nGasteiger E, Gattiker A, Hoogland C, et al.: ExPASy: the proteomics server for in-depth protein knowledge and analysis. Nucleic Acids Res. 2003; 31(13): 3784–3788.\n\nGreene MK, Maskos K, Landry JS: Role of the J-domain in the cooperation of hsp40 with hsp70. Proc Natl Acad Sci U S A. 1998; 95(11): 6108–6113. 
Hartl FU, Hayer-Hartl M: Molecular chaperones in the cytosol: from nascent chain to folded protein. Science. 2002; 295(5561): 1852–1858.\n\nHaslbeck M, Franzmann T, Weinfurtner D, et al.: Some like it hot: the structure and function of small heat-shock proteins. Nat Struct Mol Biol. 2005; 12(10): 842–846.\n\nHoffmann T, Hovemann B: Heat-shock proteins, Hsp84 and Hsp86, of mice and men: two related genes encode formerly identified tumour-specific transplantation antigens. Gene. 1988; 74(2): 491–501.\n\nJiang RF, Greener T, Barouch W, et al.: Interaction of auxilin with the molecular chaperone, Hsc70. J Biol Chem. 1997; 272(10): 6141–6145.\n\nJohnson RB, Fearon K, Mason T, et al.: Cloning and characterization of the yeast chaperonin HSP60 gene. Gene. 2003; 84(2): 295–302.\n\nKappé G, Franck E, Verschuure P, et al.: The human genome encodes 10 alpha-crystallin-related small heat shock proteins: hspB1-10. Cell Stress Chaperones. 2003; 8(1): 53–61.\n\nKappe G, Leunissen JA, de Jong WW: Evolution and diversity of prokaryotic small heat shock proteins. Prog Mol Subcell Biol. 2002; 28: 1–17.\n\nKeller JM, Escara-Wilke JF, Keller ET: Heat stress-induced heat shock protein 70 expression is dependent on ERK activation in zebrafish (Danio rerio) cells. Comp Biochem Physiol A Mol Integr Physiol. 2008; 150(3): 307–314.\n\nKim KK, Kim R, Kim SH: Crystal structure of a small heat-shock protein. Nature. 1998; 394(6693): 595–599.\n\nKorber P, Stahl JM, Nierhaus KH, et al.: Hsp15: a ribosome-associated heat shock protein. EMBO J. 2000; 19(4): 741–748. 
Kota P, Summers DW, Ren HY, et al.: Identification of a consensus motif in substrates bound by a Type I Hsp40. Proc Natl Acad Sci U S A. 2009; 27(27): 11073–11078.\n\nKregel KC: Heat shock proteins: modifying factors in physiological stress responses and acquired thermotolerance. J Appl Physiol (1985). 2002; 92(5): 2177–2186.\n\nKrishna P, Gloor G: The Hsp90 family of proteins in Arabidopsis thaliana. Cell Stress Chaperones. 2001; 6(3): 238–246.\n\nLaksanalamai P, Robb FT: Small heat shock proteins from extremophiles: a review. Extremophiles. 2004; 8(1): 1–11.\n\nLanneau D, Brunet M, Frisan E, et al.: Heat shock proteins: essential proteins for apoptosis regulation. J Cell Mol Med. 2008; 12(3): 743–761.\n\nLatchman DS: Protection of neuronal and cardiac cells by HSP27. Prog Mol Subcell Biol. 2002; 28: 253–265.\n\nLetunic I, Doerks T, Bork P: SMART 6: recent updates and new developments. Nucleic Acids Res. 2009; 37(Database issue): D229–232.\n\nLi GC, Li L, Liu YK, et al.: Thermal response of rat fibroblasts stably transfected with the human 70-kDa heat shock protein-encoding gene. Proc Natl Acad Sci U S A. 1991; 88(5): 1681–1685.\n\nLichtarge O, Bourne HR, Cohen FE: An evolutionary trace method defines binding surfaces common to protein families. J Mol Biol. 1996; 257(2): 342–358.\n\nMeriin AB, Zhang X, He X, et al.: Huntington toxicity in yeast model depends on polyglutamine aggregation mediated by a prion-like protein Rnq1. J Cell Biol. 2002; 157(6): 997–1004. 
Muchowski PJ, Wacker JL: Modulation of neurodegeneration by molecular chaperones. Nat Rev Neurosci. 2005; 6(1): 11–22.\n\nNarberhaus F: Alpha-crystallin-type heat shock proteins: socializing minichaperones in the context of a multichaperone network. Microbiol Mol Biol Rev. 2002; 66(1): 64–93.\n\nQiu XB, Shao YM, Miao S, et al.: The diversity of the DnaJ/Hsp40 family, the crucial partners for Hsp70 chaperones. Cell Mol Life Sci. 2006; 63(22): 2560–2570.\n\nQuinlan R: Cytoskeletal competence requires protein chaperones. Prog Mol Subcell Biol. 2002; 28: 219–234.\n\nRoy S, Maheshwari N, Chauhan R, et al.: Structure prediction and functional characterization of secondary metabolite proteins of Ocimum. Bioinformation. 2011; 6(8): 315–319.\n\nShi YY, Hong XG, Wang CC: The C-terminal (331–376) sequence of Escherichia coli DnaJ is essential for dimerization and chaperone activity: a small angle X-ray scattering study in solution. J Biol Chem. 2005; 280(24): 22761–22768.\n\nStechmann A, Cavalier-Smith T: Evolutionary origins of Hsp90 chaperones and a deep paralogy in their bacterial ancestors. J Eukaryot Microbiol. 2004; 51(3): 364–373.\n\nSuh WC, Burkholder WF, Lu CZ, et al.: Interaction of the Hsp70 molecular chaperone, DnaK, with its cochaperone DnaJ. Proc Natl Acad Sci U S A. 1998; 95(26): 15223–15228."
}
|
[
{
"id": "39552",
"date": "23 Oct 2018",
"name": "José Luis Martínez-Guitarte",
"expertise": [
"Molecular and cellular environmental toxicology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this article Ajayi et al. localize 67 different heat shock proteins in the genome of Bos taurus using human and mouse proteins and conserved motifs of each family. The proteins identified belong to the small HSP, HSP40, HSP70, and HSP90 groups. The ORF of each protein is analyzed to determine its physicochemical properties, and an evolutionary study is carried out. The different protein sequences are compared to draw conclusions about the conservation and the main characteristics of each set of proteins.\nThe approach is adequate but I have some questions. The first one is the source of the B. taurus genome. The authors do not explain at any moment which database was used to obtain the genome. Most of the sequences are included in the NCBI protein database, so why did they decide to search directly in the genome before checking the database? On the other hand, they only analyzed the sequences of four families; why did they not include the hsp10, hsp60, and hsp110 groups? About the results, it is striking that some proteins that show some of the motifs did not appear in the search. For example, gp96 is a protein with an Hsp90 motif so it is surprising that in a search with Hsp90, Pfam did not give any similarity. Some are similar to other proteins related to the HSP70 family (e.g. HSP12A) due to the presence of an HSP70 motif.\nAbout the results and discussion, they are mainly focused on the protein sequences and the differences and similarities of amino acids. 
I miss some discussion related to the gene structure (presence/absence of introns) and to the transcriptional activity information (what is known about some of the genes described here). I understand that the number of genes is high and it is hard to discuss everything but, in some cases, this information can be helpful in relation to the putative function of the protein. For example, the authors seem to assume that sHSPs are related to stress but many of them have a role in processes like development. This should be taken into account when they are divided into different groups.\nAs stated before, I miss some information related to the genes. It would be interesting to incorporate in the tables the position in the chromosomes since the authors have located each gene in the chromosome. For the sHSPs it would be interesting to know if they are close to each other, since it has been proposed that duplication is a putative mechanism for increasing the number of members of this family and, often, two related sHSPs are in head-to-head or head-to-tail positions. Also, it would be helpful to include the existence of introns in each gene, as this could help to find homologs in other species.\nAbout the tree, why did the authors not include an example of an external group?\nOverall the article is technically sound and it is a first step to describe more deeply the heat shock protein set in this species. It is important that the authors remark that they are only analyzing four of the families, maybe the most important, but for example, they did not include the Hsp60 and Hsp10 families that are mitochondrial. On the other hand, the main weakness, in my opinion, is to discuss the results without taking into consideration any functional information (especially for sHSPs and HSP70) because it could help to associate putative physicochemical properties with an inducible role.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "39863",
"date": "12 Nov 2018",
"name": "Ankit K. Rochani",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAjayi et al. present an important technically sound in silico data about the genes that code for sHsp, Hsp40, Hsp70 and Hsp90 from Bos Taurus. It can be accepted as it is after considering the following minor suggestions:\nAuthors are requested to justify the reason for restricting their analysis to only sHsp, Hsp40, Hsp70 and Hsp90.\n\nIt is unclear why the searches for Hsp90 were unable to identify genes for GRP94.\n\nAuthors can keep all the identified genes in Tables 1 to 4 in the supplementary figure.\n\nAuthors are requested to briefly provide comments on the physico-chemical values for the Hsps from bovine in comparison to humans and mouse.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "40818",
"date": "22 Nov 2018",
"name": "Anurag Sharma",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn the manuscript entitled ‘Computational genome-wide identification of heat shock protein genes in the bovine genome’, the authors have used an in silico approach to identify Hsps in the bovine genome. The authors identified 67 genes belonging to four Hsp families. The manuscript is well written and the observations do support the conclusions.\nHowever, the authors should address the following concerns:\nThe purpose of the study should be made clearer in the introduction.\n\nHave the authors checked the Heat Shock Factor similarities with human or mouse?\n\nThe authors should discuss the functional correlation of the identified Hsps.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1504
|
https://f1000research.com/articles/7-1501/v1
|
20 Sep 18
|
{
"type": "Research Article",
"title": "Validation of the Polar RS800CX for assessing heart rate variability during rest, moderate cycling and post-exercise recovery",
"authors": [
"Kyriakos I. Tsitoglou",
"Yiannis Koutedakis",
"Petros C. Dinas",
"Kyriakos I. Tsitoglou",
"Yiannis Koutedakis"
],
"abstract": "Background: Heart rate variability (HRV) is an autonomic nervous system marker that provides reliable information for both disease prevention and diagnosis; it is also used in sport settings. We examined the validity of the Polar RS800CX heart rate monitor during rest, moderate cycling, and recovery, considering a total of 24 HRV indices. Method: A total of 32 healthy males (age=24.78±6.87 years, body mass index=24.48±3.13 kg/m2) completed a session comprising three 20-minute periods of rest, cycling at 60% of maximal heart rate, and recovery, monitored using a Polar RS800CX and an electrocardiogram (ECG). The HRV indices included time-domain, frequency-domain, Poincaré plot and recurrence plot indices. Bland–Altman plot analysis was used to estimate agreement between the Polar RS800CX and the ECG. Results: We detected significant associations (r>0.75, p<0.05) in all HRV indices, while five out of 24 HRV indices displayed significant mean differences (p<0.05) between the Polar RS800CX and the ECG during the resting period. However, for the exercise and recovery periods, we found significant mean differences (p<0.05) in 16/24 and 22/24 HRV indices between the two monitors, respectively. Conclusion: The Polar RS800CX is a valid tool for monitoring HRV under resting conditions, but it displays inconsistency when used during exercise at 60% of maximal heart rate and during recovery.",
"keywords": [
"Heart Rate",
"HR",
"HRV",
"Polar RS800CX",
"Electrocardiogram",
"ECG"
],
"content": "Introduction\n\nVariations in successive heart rate (HR) and RR intervals [the time between successive R peaks of the electrocardiogram (ECG)] are described as heart rate variability (HRV), which is the conventionally accepted term for variations of RR intervals1. HRV is an autonomic nervous system (ANS) marker that may provide reliable information for both disease prevention and diagnosis2,3, and it is frequently applied in sport settings4,5. Furthermore, HRV can be used as a tool to identify physiological6,7 and psychological8,9 disorders, and it has been utilised for diagnosis in both clinical10 and non-clinical studies11.\n\nIn sport settings, HRV is primarily utilized to determine training loads12–14 and endurance training adaptation15,16. The wide use of HRV in both clinical and basic research as a diagnostic criterion has resulted in increased production of HRV-related equipment and software. Gold-standard references such as PowerLab (ADInstruments, Australia) and the Reynolds Pathfinder program (Reynolds Medical Limited, United Kingdom) were developed and used extensively in comparisons with other HR monitors (i.e. Polar)17. However, most of these innovations present disadvantages, such as difficulty of access and high cost18. To address these issues, more practical and cost-effective HRV tools were developed. The Polar RS800CX HR monitor (Polar Electro, Finland) has been presented19–21 as a valid HR monitor for HRV analysis during rest and stress conditions (e.g. exercise). To date, however, the wide spectrum of HRV indices (i.e. time and frequency domain, Poincaré and recurrence plot) has not been tested for validation in the Polar RS800CX. Also, the performance of the Polar RS800CX in post-stress conditions has not been extensively investigated. 
Therefore, the purpose of this study was to assess the validity of the Polar RS800CX across a large spectrum of HRV indices by comparing it with an ECG monitor during rest, exercise (cycling) and recovery.\n\n\nMethods\n\nWritten informed consent was obtained from 32 apparently healthy males [age: 24.78±6.87 years, body mass index (BMI): 24.48±3.13 kg/m2] with no history of respiratory, metabolic, or cardiovascular conditions. Participants were recruited via flyers from the university population and the local community in Trikala, Thessaly, Greece between June and November 2012. Respondents to the flyer advertisement were interviewed to determine eligibility and were informed of all experimental procedures, associated risks, and discomforts before providing written informed consent. The sample was a sample of convenience. Only male participants were included, to avoid discomfort for female participants related to the menstrual cycle. Ethical approval was obtained from the Ethics Review Board of the University of Thessaly (Protocol no. 469).\n\nAll participants visited the Environmental Physiology Laboratory in the Department of Exercise Science only once; they were instructed to refrain from food, caffeine and strenuous exercise for 12 hours prior to the visit. Participants arrived at the physiology laboratory between 7 and 8 am, and their height (cm) and weight (kg) were measured using a Seca 220 device (Hamburg, Germany). Subsequently, both a Polar RS800CX (Polar Electro Oy, Kempele, Finland) and a 12-lead ECG (Welch Allyn, CardioPerfect, New York, USA) monitor were fitted to each participant, who then remained in a supine position on a comfortable bed and rested for 20 minutes in a quiet room under thermo-neutral conditions (22–24°C and 40–60% relative humidity). 
Immediately after the 20 minutes of resting, participants performed an aerobic exercise session on a cycle ergometer (Monark, Ergomedic) at 60% of their maximum HR for 20 minutes. The target HR was calculated using Karvonen's formula22: [(220 – age) – resting HR] × 0.60 + resting HR. At the end of the exercise period, the participants rested in a supine position for another 20 minutes for the post-exercise recovery period. To avoid any displacement, an investigator continuously checked the position of both the chest belt of the Polar RS800CX and the electrodes of the ECG throughout the experiment. Data were collected throughout the experimental trial using both the Polar RS800CX and the ECG, as previously described23,24. The Polar RS800CX data were downloaded and saved in text format via the Polar ProTrainer 5 software, while the ECG data were collected via the Welch Allyn CardioPerfect Workstation 1.6.6 software.\n\nThe raw RR interval data from both the Polar RS800CX and the ECG were analysed using Premium Kubios HRV Analysis Software v1.1 (Biomedical Signal Analysis Group, University of Kuopio, Finland, 2002). The retrieved HRV indices covered the time-domain, frequency-domain, Poincaré plot and recurrence plot indices (Table 1–Table 3).\n\nValues for ECG and RS800CX presented as mean ± standard deviation.\n\n*Significant association between ECG and Polar RS800CX. #Significant mean differences between ECG and Polar RS800CX. ‡n=32. 
ECG, electrocardiogram; MeanRR [ms], mean of all RR intervals; STDRR [ms], standard deviation of normal-to-normal RR intervals; MeanHR [1/min], mean heart rate; STDHR [1/min], standard deviation of instantaneous heart rate values; RMSSD [ms], root mean square of successive differences; pNN50, proportion of differences between adjacent NN intervals of more than 50 ms; NN50 [beats], number of NN intervals that differ by more than 50 ms; HRV triangular index [-], integral of the RR interval histogram divided by the height of the histogram; TINN [ms], baseline width of the RR interval histogram; LF (ms2), low frequency; HF (ms2), high frequency; LF/HF [-], ratio LF [ms2]/HF [ms2]; SD1 [ms], dispersion of the points perpendicular to the line of identity, thought to be an index of the instantaneous beat-to-beat variability of the data; SD2 [ms], dispersion of the points along the line of identity, thought to represent the slow variability of heart rate; SampEn [-], sample entropy, the complexity of the NN series; ApEn, approximate entropy, the complexity or irregularity of the signal; DFA, detrended fluctuation analysis of the correlation within the HRV signal, divided into short-term and long-term fluctuations; Lmin (beats), mean line length; Lmax (beats), maximum line length; REC [%], recurrence rate; DET [%], determinism; ShanEn [-], Shannon entropy, considering the lengths of the diagonal lines; D2 [-], correlation dimension.\n\nNormal distribution was checked via the Shapiro-Wilk test. Due to non-normal distribution, a two-step transformation was used to normalise all HRV variables, given that the Bland–Altman method requires normally distributed data25. Pearson's correlation coefficient was employed to assess associations, and paired-sample t-tests were used to calculate the mean differences of HRV indices between the Polar RS800CX and the ECG during rest, exercise and recovery. 
The Bland–Altman plots and the 95% limits of agreement (95%LoA) were used to calculate agreement for all HRV indices during rest, exercise and recovery. We also calculated effect sizes between the Polar RS800CX and the ECG via Cohen’s d pooled effect size analysis for each HRV index during rest, exercise and recovery. We estimated the error rate of Mean RR intervals during rest, cycling and recovery with the following equation: [(Mean RR ECG – Mean RR RS800CX)/Mean RR ECG] × 100. Missing data were removed from the analysis, given that they were missing at random. The statistical analysis was completed using IBM SPSS v24 and the level of significance was set at p<0.05.\n\n\nResults\n\nThe results of the Pearson correlation coefficient, paired-sample t-tests, 95%LoA and Cohen’s d pooled effect size analyses appear in Table 1 for the resting period, Table 2 for the exercise period and Table 3 for the recovery period. Dataset 1 contains all raw data obtained using both measurement methods26. The Bland–Altman plots for the resting, exercise and recovery periods can be found in Supplementary File 1. Missing values were removed from the analysis and the final number of participants for each HRV index appears in Table 1 for the resting period, Table 2 for the exercise period and Table 3 for the recovery period.\n\nValues for the ECG and Polar RS800CX are presented as mean ± standard deviation.\n\n*Significant association between ECG and Polar RS800CX. #Significant mean differences between ECG and Polar RS800CX. ‡n=32 unless indicated.
ECG, electrocardiogram; MeanRR [ms], mean of all RR intervals; STDRR [ms], standard deviation of normal-to-normal (NN) RR intervals; MeanHR [1/min], mean heart rate; STDHR [1/min], standard deviation of instantaneous heart rate values; RMSSD [ms], root mean square of successive RR interval differences; pNN50, proportion of differences between adjacent NN intervals of more than 50 ms; NN50 [beats], number of NN intervals that differ by more than 50 ms; HRV triangular index [-], integral of the RR interval histogram divided by the height of the histogram; TINN [ms], baseline width of the RR interval histogram; LF [ms2], low frequency power; HF [ms2], high frequency power; LF/HF [-], ratio LF [ms2]/HF [ms2]; SD1 [ms], dispersion of the points perpendicular to the line of identity, an index of the instantaneous beat-to-beat variability of the data; SD2 [ms], dispersion of the points along the line of identity, representing the slow variability of heart rate; SampEn [-], sample entropy, a measure of the complexity of the NN interval series; ApEn [-], approximate entropy, a measure of the complexity or irregularity of the signal; DFA, detrended fluctuation analysis of the correlations within the HRV signal, divided into short-term and long-term fluctuations; Lmin [beats], mean line length; Lmax [beats], maximum line length; REC [%], recurrence rate; DET [%], determinism; ShanEn [-], Shannon entropy of the diagonal line lengths; D2 [-], correlation dimension.\n\nDuring the resting period, the Polar RS800CX showed significant correlations (r>0.75, p<0.05) with the ECG in all studied HRV indices. One time domain (RMSSD), one frequency domain (LF/HF) and three Poincaré plot (SD1, SD2, ApEn) HRV indices showed significant mean differences (p<0.05) between the Polar RS800CX and the ECG. Also, one time domain (RMSSD), one frequency domain (LF/HF) and one Poincaré plot (SD1) HRV index showed small effect sizes (0.28–0.45) between the Polar RS800CX and the ECG (Table 1).
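The effect sizes reported here come from a pooled Cohen’s d; a minimal sketch of the standard pooled formula, assuming two samples of one index (the function name and the RMSSD-like values are ours, not the study’s data):

```python
import math

def cohens_d_pooled(x, y):
    """Cohen's d with a pooled standard deviation for two samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Hypothetical RMSSD values (ms) from the two devices.
d = cohens_d_pooled([42.0, 38.5, 45.1, 40.2], [39.0, 36.8, 42.5, 38.1])
print(round(d, 2))  # → 0.89
```

By the usual convention, |d| around 0.2 is small, 0.5 medium and 0.8 large, which is how the "small to large" ranges in the tables are read.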
Finally, during rest, the error rate of Mean RR intervals obtained from the Polar RS800CX and the ECG was 0.3%.\n\nIn the exercise period, one time domain [STDRR (r=0.42, p<0.05)], three Poincaré plot [SampEn (r=0.56, p<0.05), ApEn (r=0.65, p<0.05), DFAa2 (r=0.55, p<0.05)] and two recurrence plot [Lmin (r=0.51, p<0.05), DET (r=0.47, p<0.05)] HRV indices showed significant correlations between the Polar RS800CX and the ECG. However, seven of nine time domain, five of six Poincaré plot and two of six recurrence plot HRV indices displayed significant mean differences (p<0.05) between the Polar RS800CX and the ECG. Small to large effect sizes (0.23–2.61) were found in 20 of the 24 examined HRV indices between the Polar RS800CX and the ECG (Table 2). Finally, during exercise, the error rate of Mean RR intervals obtained from the Polar RS800CX and the ECG was 28.1%.\n\nDuring the recovery period, two time domain (RMSSD, pNN50) and three recurrence plot (Lmin, REC, DET) HRV indices showed significant correlations (r=0.39–0.59, p<0.05) between the Polar RS800CX and the ECG. In total, eight of nine time domain, all frequency domain, five of six Poincaré plot and all recurrence plot HRV indices showed significant mean differences (p<0.05) between the Polar RS800CX and the ECG. All the HRV indices showed small to large effect sizes (0.42–2.99) between the Polar RS800CX and the ECG (Table 3). Finally, during recovery, the error rate of Mean RR intervals obtained from the Polar RS800CX and the ECG was 68.3%.\n\nValues for the ECG and Polar RS800CX are presented as mean ± standard deviation; n=number of cases.\n\n*Significant association between ECG and Polar RS800CX. #Significant mean differences between ECG and Polar RS800CX. ‡n=32.
MeanRR [ms], mean of all RR intervals; STDRR [ms], standard deviation of normal-to-normal (NN) RR intervals; MeanHR [1/min], mean heart rate; STDHR [1/min], standard deviation of instantaneous heart rate values; RMSSD [ms], root mean square of successive RR interval differences; pNN50, proportion of differences between adjacent NN intervals of more than 50 ms; NN50 [beats], number of NN intervals that differ by more than 50 ms; HRV triangular index [-], integral of the RR interval histogram divided by the height of the histogram; TINN [ms], baseline width of the RR interval histogram; LF [ms2], low frequency power; HF [ms2], high frequency power; LF/HF [-], ratio LF [ms2]/HF [ms2]; SD1 [ms], dispersion of the points perpendicular to the line of identity, an index of the instantaneous beat-to-beat variability of the data; SD2 [ms], dispersion of the points along the line of identity, representing the slow variability of heart rate; SampEn [-], sample entropy, a measure of the complexity of the NN interval series; ApEn [-], approximate entropy, a measure of the complexity or irregularity of the signal; DFA, detrended fluctuation analysis of the correlations within the HRV signal, divided into short-term and long-term fluctuations; Lmin [beats], mean line length; Lmax [beats], maximum line length; REC [%], recurrence rate; DET [%], determinism; ShanEn [-], Shannon entropy of the diagonal line lengths; D2 [-], correlation dimension.\n\n\nDiscussion\n\nThe aim of the current study was to assess the validity of the Polar RS800CX across a large spectrum of HRV parameters by comparing it with a 12-lead ECG monitor during rest, moderate cycling and recovery. We found that, during the resting period, all 24 HRV indices obtained from the Polar RS800CX were significantly correlated with the corresponding ECG indices, while only five displayed significant mean differences. This confirms recent data showing that the Polar RS800CX is valid for resting HRV measurements19–21,27,28.
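The agreement underlying this conclusion rests on the 95% limits of agreement, conventionally the mean difference ± 1.96 × SD of the paired differences; a minimal sketch with invented MeanRR values (not the study’s data):

```python
import statistics

def limits_of_agreement(ecg_vals, polar_vals):
    """Bland-Altman 95% limits of agreement: bias ± 1.96 * SD(differences)."""
    diffs = [e - p for e, p in zip(ecg_vals, polar_vals)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired MeanRR values (ms) from ECG and the HR monitor.
low, high = limits_of_agreement([812, 799, 845, 780, 830],
                                [810, 800, 843, 781, 829])
print(f"95% LoA: {low:.2f} to {high:.2f} ms")
```

A device is judged in good agreement when these limits are narrow enough to be clinically or physiologically unimportant for the index in question.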
However, we confirmed for the first time that this is the case across a wide spectrum of HRV indices. Also, the majority of previous studies showed that other Polar HR monitors (i.e. S810, 810s, S810i, V800) were in good agreement with ECG in detecting RR intervals in the supine and standing positions of adult individuals5,18,23 and children24,29. Nevertheless, one previous study showed a bias of the Polar S810 in detecting HRV indices in supine and standing positions during rest30.\n\nDuring the exercise period, the Polar RS800CX disagreed with the ECG in most HRV indices. The error rate of Mean RR intervals obtained from the Polar RS800CX and the ECG in our study was 28.1%. This is substantially higher than the error rate (0.71%) of RR intervals in a similar previous study that examined the validity of the Polar V800 during exercise31. Also, a previous study showed a similar bias of the Polar S810 HR monitor during exercise in the high frequency (HF) HRV index at intensities >60% of VO2 max and the low frequency (LF) HRV index at intensities of 80–100% of VO2 max, even though the HR monitor was found to be relatively valid at exercise intensities <60% of VO2 max29. Subsequent studies showed no bias for the Polar S810 during exercise18 or for the Polar V800 during endurance running31. Reasons that Polar HR monitors may display bias in measuring HRV indices during exercise include: a) insecure placement of the elastic band on the thorax; b) movement of the ECG electrodes during exercise (arrhythmia can be excluded as a cause, given that our participants were healthy and showed none); c) errors in the transmission of data from the HR monitors; and d) differences in R-wave detection and the peak-detection algorithms used29,32–35.\n\nRegarding the recovery period, the existing evidence is rather scarce. In our study, we found that the Polar RS800CX displayed bias in the recovery period: the error rate of mean RR intervals obtained from the Polar RS800CX and the ECG was 68.3%.
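The error rates quoted in this study (0.3%, 28.1%, 68.3%) follow the equation given in the statistical analysis, [(Mean RR ECG − Mean RR RS800CX)/Mean RR ECG] × 100; a minimal sketch with invented mean RR values:

```python
def rr_error_rate(mean_rr_ecg: float, mean_rr_polar: float) -> float:
    """Percentage error of the Polar mean RR relative to the ECG mean RR."""
    return (mean_rr_ecg - mean_rr_polar) / mean_rr_ecg * 100.0

# Hypothetical mean RR intervals (ms); not the study's data.
print(round(rr_error_rate(600.0, 431.4), 1))  # → 28.1
```

Note that the sign of the result indicates the direction of the bias: a positive rate means the monitor under-reported the mean RR interval relative to the ECG.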
Perhaps some of the reasons reported above for the bias displayed under stress/exercise conditions may also apply to the recovery stage.\n\nA limitation of the current study might be the small sample size. However, a post-measurement online power calculation (DSS Research), using as an index the resting RS800CX Mean RR HRV index of the current study and the resting RS800CX Mean RR HRV index from a previous similar study19, showed 100% statistical power (n=32) for our study. Another limitation might be that only males participated in this study and, therefore, our outcomes should be treated with caution when applied to females. However, we used a well-established statistical approach, as previously described19,21,23,31. Finally, even though an investigator continuously checked the position of the Polar RS800CX chest belt and the ECG electrodes, there is a possibility of displacement due to sweating during the exercise and recovery periods, as previously suggested36.\n\nWe conclude that the Polar RS800CX is a valid tool for monitoring HRV during resting periods, but not for assessments during exercise at an intensity of 60% of maximum HR or during post-exercise recovery. Testing the validity of devices such as Polar monitors under stressful (hot/cold or extreme) conditions such as exercise requires further scientific attention, given that these instruments could provide a cost-effective method for monitoring HRV.\n\n\nData availability\n\nDataset 1. Validation of Polar RS800CX for heart rate variability measurements. DOI: https://doi.org/10.5256/f1000research.16130.d21672226.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work\n\n\nSupplementary material\n\nSupplementary File 1. Bland–Altman plots for resting, cycling and recovery periods.\n\nClick here to access the data.\n\n\nReferences\n\nLerman A, Zeiher AM: Endothelial function: cardiac events. Circulation. 2005; 111(3): 363–8. PubMed Abstract | Publisher Full Text\n\nAkselrod S, Gordon D, Ubel FA, et al.: Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control. Science. 1981; 213(4504): 220–2. PubMed Abstract | Publisher Full Text\n\nFouad FM, Tarazi RC, Ferrario CM, et al.: Assessment of parasympathetic control of heart rate by a noninvasive method. Am J Physiol. 1984; 246(6 Pt 2): H838–42. PubMed Abstract | Publisher Full Text\n\nSeals DR, Chase PB: Influence of physical training on heart rate variability and baroreflex circulatory control. J Appl Physiol (1985). 1989; 66(4): 1886–95. PubMed Abstract | Publisher Full Text\n\nGiles D, Draper N, Neil W: Validity of the Polar V800 heart rate monitor to measure RR intervals at rest. Eur J Appl Physiol. 2016; 116(3): 563–71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTsuji H, Venditti FJ Jr, Manders ES, et al.: Reduced heart rate variability and mortality risk in an elderly cohort. The Framingham Heart Study. Circulation. 1994; 90(2): 878–83. PubMed Abstract\n\nMolgaard H, Sørensen KE, Bjerregaard P: Attenuated 24-h heart rate variability in apparently healthy subjects, subsequently suffering sudden cardiac death. Clin Auton Res. 1991; 1(3): 233–7. PubMed Abstract | Publisher Full Text\n\nFriedman BH, Thayer JF: Autonomic balance revisited: panic anxiety and heart rate variability. J Psychosom Res. 1998; 44(1): 133–51. PubMed Abstract | Publisher Full Text\n\nDishman RK, Nakamura Y, Garcia ME, et al.: Heart rate variability, trait anxiety, and perceived stress among physically fit men and women. 
Int J Psychophysiol. 2000; 37(2): 121–33. PubMed Abstract | Publisher Full Text\n\nRohde LE, Polanczyk CA, Moraes RS, et al.: Effect of partial arrhythmia suppression with amiodarone on heart rate variability of patients with congestive heart failure. Am Heart J. 1998; 136(1): 31–6. PubMed Abstract | Publisher Full Text\n\nStein PK, Ehsani AA, Domitrovich PP, et al.: Effect of exercise training on heart rate variability in healthy older adults. Am Heart J. 1999; 138(3 Pt 1): 567–76. PubMed Abstract | Publisher Full Text\n\nMelanson EL, Freedson PS: The effect of endurance training on resting heart rate variability in sedentary adult males. Eur J Appl Physiol. 2001; 85(5): 442–9. PubMed Abstract | Publisher Full Text\n\nLevy WC, Cerqueira MD, Harp GD, et al.: Effect of endurance exercise training on heart rate variability at rest in healthy young and older men. Am J Cardiol. 1998; 82(10): 1236–41. PubMed Abstract | Publisher Full Text\n\nTulppo MP, Hautala AJ, Makikallio TH, et al.: Effects of aerobic training on heart rate dynamics in sedentary subjects. J Appl Physiol (1985). 2003; 95(1): 364–72. PubMed Abstract | Publisher Full Text\n\nMourot L, Bouhaddi M, Perrey S, et al.: Decrease in heart rate variability with overtraining: assessment by the Poincaré plot analysis. Clin Physiol Funct Imaging. 2004; 24(1): 10–8. PubMed Abstract | Publisher Full Text\n\nHedelin R, Wiklund U, Bjerle P, et al.: Cardiac autonomic imbalance in an overtrained athlete. Med Sci Sports Exerc. 2000; 32(9): 1531–3. PubMed Abstract | Publisher Full Text\n\nBenjamin EJ, Larson MG, Keyes MJ, et al.: Clinical correlates and heritability of flow-mediated dilation in the community: the Framingham Heart Study. Circulation. 2004; 109(5): 613–9. PubMed Abstract | Publisher Full Text\n\nVanderlei LC, Silva RA, Pastre CM, et al.: Comparison of the Polar S810i monitor and the ECG for the analysis of heart rate variability in the time and frequency domains. Braz J Med Biol Res. 2008; 41(10): 854–9. 
PubMed Abstract | Publisher Full Text\n\nWilliams DP, Jarczok MN, Ellis RJ, et al.: Two-week test-retest reliability of the Polar® RS800CX™ to record heart rate variability. Clin Physiol Funct Imaging. 2017; 37(6): 776–781. PubMed Abstract | Publisher Full Text\n\nVasconcellos FV, Seabra A, Cunha FA, et al.: Heart rate variability assessment with fingertip photoplethysmography and polar RS800cx as compared with electrocardiography in obese adolescents. Blood Press Monit. 2015; 20(6): 351–60. PubMed Abstract | Publisher Full Text\n\nEssner A, Sjöström R, Ahlgren E, et al.: Comparison of Polar® RS800CX heart rate monitor and electrocardiogram for measuring inter-beat intervals in healthy dogs. Physiol Behav. 2015; 138: 247–53. PubMed Abstract | Publisher Full Text\n\nCamarda SR, Tebexreni AS, Pafaro CN, et al.: Comparison of maximal heart rate using the prediction equations proposed by Karvonen and Tanaka. Arq Bras Cardiol. 2008; 91(5): 311–4. PubMed Abstract | Publisher Full Text\n\nGamelin FX, Berthoin S, Bosquet L: Validity of the polar S810 heart rate monitor to measure R-R intervals at rest. Med Sci Sports Exerc. 2006; 38(5): 887–93. PubMed Abstract | Publisher Full Text\n\nGamelin FX, Baquet G, Berthoin S, et al.: Validity of the polar S810 to measure R-R intervals in children. Int J Sports Med. 2008; 29(2): 134–8. PubMed Abstract | Publisher Full Text\n\nBland JM, Altman DG: Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986; 1(8476): 307–10. PubMed Abstract | Publisher Full Text\n\nTsitoglou K, Koutedakis Y, Dinas P: Dataset 1 in: Validation of polar RS800CX for assessing heart rate variability during rest, moderate cycling and post-exercise recovery. F1000Research. 2018. 
http://www.doi.org/10.5256/f1000research.16130.d216722\n\nBarbosa MP, da Silva NT, de Azevedo FM, et al.: Comparison of Polar® RS800G3™ heart rate monitor with Polar® S810i™ and electrocardiogram to obtain the series of RR intervals and analysis of heart rate variability at rest. Clin Physiol Funct Imaging. 2016; 36(2): 112–7. PubMed Abstract | Publisher Full Text\n\nChernozub AA: [Heart rate variability in untrained young men under different power loading modes]. Vestn Ross Akad Med Nauk. 2014; (1–2): 51–6. PubMed Abstract\n\nKingsley M, Lewis MJ, Marson RE: Comparison of Polar 810s and an ambulatory ECG system for RR interval measurement during progressive exercise. Int J Sports Med. 2005; 26(1): 39–44. PubMed Abstract | Publisher Full Text\n\nNunan D, Donovan G, Jakovljevic DG, et al.: Validity and reliability of short-term heart-rate variability from the Polar S810. Med Sci Sports Exerc. 2009; 41(1): 243–50. PubMed Abstract | Publisher Full Text\n\nCaminal P, Sola F, Gomis P, et al.: Validity of the Polar V800 monitor for measuring heart rate variability in mountain running route conditions. Eur J Appl Physiol. 2018; 118(3): 669–77. PubMed Abstract | Publisher Full Text\n\nEklund B, Kaijser L, Knutsson E: Blood flow in resting (contralateral) arm and leg during isometric contraction. J Physiol. 1974; 240(1): 111–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNunan D, Jakovljevic DG, Donovan G, et al.: Levels of agreement for RR intervals and short-term heart rate variability obtained from the Polar S810 and an alternative system. Eur J Appl Physiol. 2008; 103(5): 529–37. PubMed Abstract | Publisher Full Text\n\nSanders JS, Mark AL, Ferguson DW: Evidence for cholinergically mediated vasodilation at the beginning of isometric exercise in humans. Circulation. 1989; 79(4): 815–24. PubMed Abstract\n\nBhagyalakshmi A, Frangos JA: Mechanism of shear-induced prostacyclin production in endothelial cells. Biochem Biophys Res Commun. 1989; 158(1): 31–7. 
PubMed Abstract | Publisher Full Text\n\nMcCann K, Holdgate A, Mahammad R, et al.: Accuracy of ECG electrode placement by emergency department clinicians. Emerg Med Australas. 2007; 19(5): 442–8. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "45213",
"date": "19 Mar 2019",
"name": "Alberto Hernando",
"expertise": [
"Reviewer Expertise HRV analysis"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article provides the validation of the Polar RS800CX in rest and during an exercise test when compared with a reference Gold-Standard ECG. To do that, a full database is recorded and several times, frequency and entropy indices are compared. Similarities between Polar and ECG measures are found in the rest stage, but not during exercise and recovery phases. Therefore, the use of Polar RS800CX is not fully recommended in exercise tests.\nI found the paper interesting but I have some comments that should be discussed in order to improve the understanding of the entire work.\n\nIntroduction:\nA more detailed introduction is needed, especially in two different areas: HRV (why is it so useful, which parameters can be extracted from it and how, what is the meaning of these parameters, has the respiration got any implication in HRV analysis?) and in the conditions that HRV could be applied in sport analysis (is it easy to point out the QRS complex in a noisy ECG? What happened with the non-stationary nature of HRV during exercise? How could the frequency analysis be affected with the high values of respiratory rate? Is there any component associated with the cadence/rhythm?). These details must be mentioned to put in context all the peculiarities of HRV analysis in sport.\nMethods:\nVolunteers and experimental protocol: what is the sampling frequency of the ECG? Is the cycling cadence controlled? 
Has the respiratory rate been registered and analysed somehow?\n\nAnalysis of heart rate variability: does QRS detection work fine? Was any artefact detection and correction applied? An explanation of the different indices (their meaning, how they are computed) must be included, especially for frequency indices (the method used, the frequency limits, and is the respiratory rate considered in HF?). Also, a plot with HRV extracted from the Polar and HRV extracted with the ECG may help to understand the paper.\n\nStatistical analysis: Due to the non-normal distribution of the results, I think the Spearman correlation test for correlation and the Wilcoxon test for differences between Polar and ECG are more appropriate than the Pearson and paired-sample t-test. Is the error rate presented in any Table?\nResults:\nFor the 3 Tables, if the distribution is non-normal, median and IQR are more suitable than mean and std. Why does the correlation limit change from one Table to another? If r>0.75 is the value marked, it must be maintained throughout the article. By the way, why 0.75 and no other value? If the correlation appears in all the values of the Table, maybe it is better not to mark all the cells with * and only say this in the Table explanation. The number of subjects is another parameter that would perhaps be better stated at the beginning rather than put in all the cells.\n\nThe Tables are too big, so a little simplification may help to understand the final purpose of the article. For example, are mean RR-STDRR and mean HR-STDHR not somewhat redundant? Maybe one pair is enough. 
Also, if the indices are defined in the methods section, the table captions can be reduced considerably.\nDiscussion:\nThis part is based on the comparison between ECG and Polar, but if you have analysed 24 parameters, I would explain something more about them (differences between stages, why to trust or not to trust in one/some parameter(s)…).\n\nRegarding the possible explanations of the differences between ECG and Polar: are reasons ‘a’ and ‘b’ not supposed to be controlled by the investigator?\n\nTo finish, more details about possible practical applications could be given too.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "41691",
"date": "24 Jun 2019",
"name": "Rita Khadka",
"expertise": [
"Reviewer Expertise Assessment of heart rate variability",
"Blood presure variability",
"Spontaneous baroreflex sensitivity",
"cardiovascular autonomic function test (cardiovascular autonomic reactivity test)",
"and EEG in heath and disease conditions. Cardiovascular exercise physiology",
"yoga",
"and high altitude physiology."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe present study aimed to validate RS800CX for assessing HRV during rest, moderate cycling and post exercise recovery by comparing its data with the data obtained from a gold standard ECG device. The study has been conducted very well. The acquisition of ECG signals for HRV analysis in all three conditions has been done simultaneously from both the devices. Statistical tests used to analyze the data are appropriate. The study has found that Polar RS800CX is a valid tool for monitoring HRV in individuals at resting conditions, but it displays inconsistency when used during exercise at 60% of maximal heart rate and recovery periods.\n\nI found the paper interesting, however, I have some comments that need to be discussed for the improvement of the work.\n\nIntroduction:\nThe introduction is well, however, there is a comment for the use of HRV as a diagnostic tool written on pages (in abstract: page 1, line 3; in introduction: page 3, para 1, line 7 & last line). The HRV can be taken as a prognostic tool rather than a diagnostic tool. It is one of the independent predictors of sudden cardiac death. However, it is not a very specific test for diagnosis of a disease.\n\nIt is better to write down \"autonomic nervous system\" rather than \"autonomous nervous system\" (page 3, para 1, line 5).\n\nMethods:\n\nThe description about volunteers and the experimental protocol are well written. 
However, the methods applied for the acquisition of ECG signals and the processing of ECG signals for HRV analysis are not described.\nWhat were the sampling frequency, low pass or band pass filter, and gain for ECG signal acquisition in both devices? Whether all ECG signals were checked for errors and edited, or erroneous data discarded, is not mentioned. These major points are missing.\nAt times, peaks are not detected by the algorithm used in the devices during the exercise and recovery periods, because the features of the ECG are not very regular. These signals need to be properly checked manually for missing peaks or artifacts detected as peaks and, if required, need to be edited properly following the standard rules of editing signals for HRV. In Polar devices, RR intervals need to be checked for errors, i.e. very short or very long RR intervals in comparison to the average RR intervals. These are not mentioned in the methods section of the present study. This is important for HRV analysis.\nIt would also be better to record baseline respiratory rate in this type of study.\n\nResults:\n\nThe results are well written in the text; however, there are comments on the tables:\nThe titles of all three tables are not self-explanatory. It would be better to rewrite them.\n\nIt would be better to write down the number of cases in the titles of the tables rather than in the legends of the tables.\n\nThe tables are long. It would be better to remove \"Mean HR and STDHR\". These parameters are not very important for HRV interpretation.\n\nIt would be better to write down the full form of NN50 before pNN50 in the table legends.\n\nThe level of significance has not been mentioned in all three table legends. It must be mentioned.\n\nThere is a comment on the SD1 description given in the legends of Tables 1, 2 and 3. The SD1 represents the dispersion of the points perpendicular to the line of identity rather than the dispersion of the points along the line of identity. 
Please check this.\n\nIn the source data set supplied, I found markedly reduced Mean RR intervals during the recovery period by the Polar RS800CX. Similarly, markedly reduced Mean RR intervals are found in several subjects during the exercise period by the ECG device. These RR intervals need to be rechecked manually for errors.\n\nDiscussion:\nIn the present study, it has been discussed well that the majority of the previous studies showed that other Polar HR monitors (i.e. S810, 810s, S810i, V800) showed good agreement with ECG in detecting RR intervals in supine and standing positions of adult individuals and children. Also, during exercise periods the Polar S810 and Polar V800 showed low biases and good agreement with ECG devices.\n\nThe present study showed disagreement of the Polar RS800CX with the ECG in HRV indices. The error rate of Mean RR intervals obtained from the Polar RS800CX and the ECG was 28.1%. However, the reasons have not been discussed. Also, in the methods, the sampling frequency and gain for both devices are not mentioned. ECG signal processing for HRV analysis is also not mentioned. These parts of the methods are very important and must be rechecked before interpreting the results and drawing conclusions.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1501
|
https://f1000research.com/articles/7-1005/v1
|
04 Jul 18
|
{
"type": "Research Article",
"title": "Discovery and description of the first human Retro-Giant virus",
"authors": [
"Elena Angela Lusi",
"Federico Caicci",
"Federico Caicci"
],
"abstract": "Background: Robert Gallo reported the first human retrovirus HLTV in 1980. What we report here is the first human giant virus, Mimivirus-like, with a retroviral core. Methods: The isolation of human giant viruses from human T cells Leukaemia was performed on 25% sucrose gradient. The purified viral pellet was examined using electron microscopy (EM), after immunolabelling with anti-FeLV gag p27 moAb, used for its ability to bind conserved epitopes among different mammalian retroviruses. RNA extracted from the viral particles was amplified with the Pan Retrovirus PCR technique that targets the most conserved VLPQG and YMDD in the Pol region of different retroviruses. The amplified genes were sequenced and analyzed with molecular phylogenetic tests. Results: EM showed the presence of ~400 nm giant viruses, mimivirus-like, specifically labelled by anti-FeLV gag p27 Ab. RNA extracted from the particles contained retroviral genes. Molecular phylogenetic analyses of 150 bp amplicon product, compared with the same size amplicons of the Pol gene of diverse retroviruses, showed that the retro-giant viruses are a distinct branch, missing from the current classification of retroviruses. Conclusions: Although sharing some of the morphological features with Mimiviruses, this human giant virus differs substantially from environmental DNA-giant viruses isolated so far, in that it manifests a unique mammalian transforming retroviral core and T cell tropism. The virus should not be confused with a classic human retrovirus nor even a large human retrovirus, but an ancestral human giant virus, mimivirus-like, with a mammalian retroviral core. Certainly, the oncogenic potential of the viral particle and its T cell tropism is of concern and further studies are needed to clarify the role of this giant virus in human diseases and evolution of archetypal retroviruses.",
"keywords": [
"Retroviruses",
"Mimiviruses",
"Giant Viruses",
"anti-FeLV gag",
"human T cell Leukaemia",
"Retro-Giant Virus"
],
"content": "Introduction\n\nOur previous paper described the presence of unusual Mimivirus-like structures in human tissues1. Like Mimiviruses (~450 nm giant viruses found in the amoebas), these human structures had the ability to retain Gram staining, and mass spectrometry revealed the presence of histone peptides that had the same footprints as giant viruses2–9. However, the human giant virus-like structures displayed a distinct and unique mammalian retroviral antigenicity.\n\nOur initial discovery in human tissues presented the conundrum of whether the structures were giant viruses with a retroviral nature or cellular components having a viral footprint. The distinction between the virus and the cells was blurred. The most difficult part to explain arose from the unique mammalian retroviral antigenicity associated with the human Mimivirus-like structures.\n\nThere was only one possibility to solve the dilemma: isolate the viruses (if really present) and verify whether they contained genetic material. 
Consequently, in the present study we chose the traditional way of isolating viruses using a sucrose gradient, following the same protocols and steps described by Prof Robert Gallo in his discovery of the first human retrovirus, human T-lymphotropic virus (HTLV)10.\n\nThe gigantic dimension of our viral particle excluded it from the orthodox understanding of retroviruses but simultaneously presented the antipodean challenge of establishing whether this giant virus’s retroviral properties signified the discovery of the first human Retro-Giant virus.\n\nIn this manuscript, we report the isolation of a human giant virus, mimivirus-like, with a retroviral core from human T cell acute lymphoblastic leukaemia, and the various experiments on the purified giant virus, including immunogold electron microscopy, nucleic acid extraction, reverse transcriptase assay, genetic sequencing and phylogenetic analyses.\n\nThe experiments, validated by independent operators kept blind, determined the retroviral nature of a human giant virus with an associated viral factory that is ancestral to archetypal retroviruses.\n\n\nMethods\n\n1 × 10^8 human T cell leukaemia cells (HPB-ALL, DSMZ, Germany), grown at 37°C in RPMI-1640 with 10% fetal bovine serum and 2 mM L-glutamine, were centrifuged at 1,500 rpm for 5 minutes at 4°C. The cell pellet was washed with 1x PBS. The cells were lysed (vortexed) with 2.5 ml PBS in the presence of 25 μl of protease inhibitor cocktail (Abmgood, Richmond, BC, Canada). The cell suspension was vortexed and incubated at 4°C for 30 minutes. Cell lysis was monitored using a phase contrast light microscope. The resulting crude extract was centrifuged at 3,000 rpm for 5 minutes. The pellet containing the cellular nuclei was discarded.\n\nThe resulting supernatant was collected and slowly dripped over 9 ml of a 35-30-25% sucrose gradient (Sigma, Milan, Italy) and centrifuged at 10,000 rpm for 5 h in 15 ml Corex glass centrifuge tubes (Fisher Scientific, Dublin, Ireland). 
Once a visible white disk, corresponding to the 25% sucrose fraction, was observed, the viral pellet was collected after centrifugation at 14,000 rpm for 30 min at 4°C.\n\nThe viral pellet was lysed with 1 ml of RNA-XPress Reagent (Himedia, Mumbai, India), a monophasic solution of phenol-guanidine thiocyanate, and incubated at room temperature (RT) for 5 minutes. This was followed by the addition of 200 µL chloroform, vortexing for 15 sec and incubation at RT for 10 min. The organic and aqueous phases were separated by centrifuging the sample at 11,000 rpm for 15 minutes at 4°C. The aqueous phase, containing RNA, was harvested and precipitated with 600 μl of isopropyl alcohol and glycogen. After incubation for 1 h at 20°C, RNA was pelleted by centrifugation at 11,000 rpm for 10 minutes. The RNA pellet was washed with 75% ethanol, air dried and resuspended in RNase-free H2O (Himedia). One aliquot was utilized for concentration determination in a MaestroNano Spectrophotometer (Maestrogen Inc, Hsinchu City, Taiwan).\n\n1 μg of total RNA was utilized for cDNA synthesis using the EasyScript cDNA Synthesis Kit (Abmgood) according to the manufacturer’s instructions. Briefly, the 20 μl reaction contained 200 units of reverse transcriptase, 0.5 μM random primers, 20 units of ribonuclease inhibitors and 500 μM dNTPs. The reaction was carried out at 25°C for 10 min, then at 42°C for 50 min.\n\nWe performed a Pan-retrovirus PCR on the RNA extracted from the giant viruses. To amplify a segment of the Pol gene, we used degenerate primers targeting a conserved region, of approximately 140 bp, between the most conserved domains VLPQG and YMDD in the Pol gene of retroviruses. The oligonucleotide primers and conditions were derived from those described by Tuke et al.11. 
The first PCR mixture was performed by amplifying 1 μl of the double-stranded cDNA reaction with the following reagents: 1 μM primer PAN-UO (5’-CTTGGATCCTGGAAAGTGCTAAGCCCAC-3’) and 1 μM primer PAN-D1 (5’-CTCAAGCTTCAGCGATGGTCATCCATCGTA-3’) with 1.25 units of thermostable DNA polymerase (Precision DNA Polymerase, Abmgood). The above mixture was brought to a final volume of 25 μl with a PCR mix (Abmgood, Richmond, BC, Canada) containing 0.2 mM dNTPs/2.0 mM MgCl2 in 1X PCR reaction buffer. The PCR was performed in a Thermal Cycler (GET3X Triple Block Thermal Cycler, Bio-Gener, China) using the following conditions: 1 cycle of 95°C for 10 minutes; 35 cycles of 95°C for 1 minute, 34°C for 1 minute and 72°C for 1 minute; 1 cycle of 72°C for 10 minutes.\n\nIn total, 1 μl of this reaction was re-amplified in a semi-nested reaction using the PAN-UI (5’-CTTGGATCCAGTGTCTAGCCCACAAGGG-3’) primer in combination with PAN-D1. Conditions for the semi-nested PCR were: 1 cycle of 10 minutes at 95°C; 40 cycles of 95°C for 1 minute, 45°C for 30 seconds and 72°C for 1 minute; 1 cycle of 72°C for 10 minutes.\n\nA 10-μl aliquot of the resulting PCR product was analyzed after electrophoresis on a 2.5% MS8 agarose gel (Laboratorios Conda, Madrid, Spain). The amplified bands were recovered from the gel with the UltraPrep Agarose Gel Extraction Kit (AHN Biotechnologie GmbH, Nordhausen, Germany) according to the manufacturer’s instructions. Briefly, the DNA was excised from the agarose gel and weighed. Three volumes of buffer (volume:weight, relative to the excised gel band) were added and the mixture was incubated at 50°C for 10 minutes. The DNA was bound to a column and centrifuged at 13,000 rpm for 1 minute. 
After a wash with 700 µl of washing buffer, the DNA was recovered from the column with 50 µl of elution buffer.\n\nDNA sequencing was performed on an ABI 3500 Automatic Sequencer (Applied Biosystems, Foster City, CA, USA) using Big Dye Terminator v3.1 (Applied Biosystems).\n\nMolecular phylogenetic analyses were made at the BMR Genomics Institute (Padua, Italy). Our sequences were aligned against other retroviral sequences. Sequence accession numbers used in the alignment of the 150 bp segment from retro-giant viruses with the equivalent VLPQG–YMDD Pol region (RT) of different retroviruses, amplified with the same Pan Retrovirus-PCR, are reported in Dataset 112. For the phylogenetic analysis of the 400 bp amplicon, retroviral sequences and accession numbers are displayed in Dataset 213.\n\nThe phylogenetic tree for the 150 bp VLPQG–YMDD interval was made using Phylogeny.fr (A La Carte Mode). T-Coffee was used for multiple alignment, Gblocks v0.91b for alignment curation, PhyML 3.1 for phylogeny and TreeDyn 198.3 for tree drawing. A non-parametric, Shimodaira-Hasegawa-like approximate Likelihood-Ratio branch test (SH-like aLRT) was used as a statistical test.\n\nFor the 400 bp amplicon, the phylogenetic tree was made using Phylogeny.fr. Muscle v3.8.31 was used for multiple alignment, Gblocks v0.91b for alignment curation, PhyML 3.1 for phylogeny and TreeDyn 198.3 for tree drawing. A non-parametric, Shimodaira-Hasegawa-like approximate Likelihood-Ratio branch test (SH-like aLRT), default HKY85, was used as a statistical test.\n\n25 μl of the 25% sucrose-isolated viral pellet was placed on Holey Carbon film on Nickel 400 mesh. 
The grids were treated for 30 minutes at room temperature with the primary monoclonal antibody (moAb) anti-Feline Leukaemia Virus p27gag (catalog number, PF12J-10A; Custom Monoclonals International, West Sacramento, CA, USA) and subsequently with a secondary anti-mouse gold-conjugated antibody (BB International anti-mouse IgG 15 nm gold conjugate; catalog number, EM.GMHL15, Batch 4838). After staining with 1% uranyl acetate, the sample was observed with a Tecnai G2 (FEI) (Thermo Fisher) transmission electron microscope, operating at 100 kV. Images were captured with a Veleta (Olympus Soft Imaging System) digital camera.\n\nGram staining of purified human giant viruses was performed with the Colour Gram 2 Biomerieux kit, following the manufacturer’s instructions. Before staining, slides were heat-fixed by passing them 3–4 times through a Bunsen burner flame.\n\n\nReverse transcriptase (RT) assay of the human giant viruses\n\nAfter sucrose gradient isolation, the viral pellet was lysed in 20 μl of 20 mM Tris-HCl pH 7.5, 100 mM NaCl, 0.1 mM EDTA, 1 mM DTT, 50% (v/v) glycerol, 0.25% Triton X-100 (Sigma). To test the ability of the human giant viruses to retro-transcribe, 10 μl of the viral lysate, instead of a reverse transcriptase enzyme, were used to retro-transcribe 1 μg of total RNA from Human Liver Total RNA (ThermoFisher Scientific, Waltham, MA, USA). The reverse transcriptase reaction for the viral pellet was carried out with random primers using a commercial kit (EasyScript cDNA Synthesis Kit; Abmgood), deprived of the supplied reverse transcriptase enzyme. The reverse transcriptase reaction was carried out at 25°C for 10 minutes, then at 42°C for 50 minutes. The reaction was stopped by heating at 85°C for 5 minutes. 
The viral reverse transcriptase activity was compared to positive controls in which a commercial RT enzyme was included (EasyScript RTase; Abmgood).\n\nAfter the reverse transcription, 2 μl of the obtained single-stranded cDNA was further amplified in the presence of 10 pmol of primers for GAPDH, 1.25 units of thermostable DNA polymerase (Precision DNA Polymerase; Abmgood) and 0.2 mM dNTPs/2.0 mM MgCl2 in 1X PCR buffer, in a final volume of 25 μl. PCR conditions were: 1 cycle of 95°C for 5 minutes; 40 cycles of 94°C for 1 minute, 58°C for 1 minute and 72°C for 1 minute; 1 cycle of 72°C for 5 minutes. 20 μl of the PCR reaction was loaded on a 1% agarose gel for electrophoresis.\n\n\nResults\n\nGiant viral particles, isolated from human T cell leukemia (HPB-ALL) cells, formed a white ring at the 25% sucrose gradient fraction. Only the 25% fraction was collected. This fraction was pure and did not contain any contamination such as cellular nuclei; the nuclear fraction was discarded in the first step of differential centrifugation, before layering onto the sucrose gradient.\n\nEM immunogold of the viral pellet depicted giant viral particles (~400 nm) that were specifically marked by an anti-Feline Leukaemia virus core p27gag moAb (Figure 1A). The purified human giant viruses retained the Gram stain, like Mimiviruses in amoebas (Figure 1B).\n\n(A) Electron microscopy immunogold shows a ~400 nm giant virus isolated from human T cell leukemia marked with anti-FeLV p27gag moAb (picture representative of 100 repeats). (B) The same viral pellet during Gram staining shows blue granules that are diagnostic of giant viruses (red arrows indicate some of these, but blue granules can be seen all over the slide). Mimiviruses (giant viruses) were first discovered in the amoebas. The amoebas had Gram positive granules that proved not to be bacteria but giant viruses, mimicking microbes. 
In the previous manuscript1, we showed the presence in human cells of Gram positive giant viral particles associated with viral factories, both sharing the retroviral antigenicity. The viral factories are located inside the cells. What we are presenting here are giant viral particles isolated from human T cell acute lymphoblastic leukaemia by sucrose gradient. This human giant virus differs from the amoebas’ giant viruses in that it displays the properties of classical retroviruses.\n\nThis result confirms our previously published findings, where the same anti-Feline Leukaemia virus p27gag moAb specifically marked the giant particles as well as the associated viral factories inside the human cells1.\n\nThe human giant particles contained retroviral RNA. Identification of the retroviral sequences, extracted from the isolated giant viral particles, was accomplished by PCR with degenerate primers targeting a highly conserved sequence in the reverse transcriptase gene of retroviruses, between the two conserved domains VLPQG and YMDD. This amplification approach with degenerate primers was initially described by Tuke et al. and is called Pan-retrovirus PCR11. This PCR system has the ability to detect a ~140 bp amplicon of the Pol gene across many different retroviruses. HIV-1, HTLV-1, the Simian D type virus Mason-Pfizer monkey virus, Moloney murine leukaemia virus, HERV-W, ERV9 and unknown lymphoma-associated retroviruses have been successfully detected with this approach14–16. The principles of the technique and the primers are illustrated in Figure 2. We performed the Pan-retrovirus PCR experiments exclusively on sucrose gradient-purified giant viruses that were first examined using EM immunogold.\n\nThe technique uses degenerate primers capable of amplifying a region in the Reverse Transcriptase, between the two conserved motifs VLPQG and YMDD in the Pol gene, across different retroviruses. The Pol sequence amplified from the human giant viruses is indicated as RGV (bold red). 
Corresponding same-size regions of different retroviruses, amplified with the same technique11,14, are reported.\n\nA predominant band of the expected size of >150 bp was amplified from the RNA extracted from the human giant viruses (Figure 3, lane 1). Multiple alignments with the equivalent, already established Pol regions of retroviruses, amplified by the same technique, confirm that our 150 bp amplicon is a Pol-like gene. A molecular phylogenetic analysis based on this region suggests that this amplicon (indicated as RGV) belongs to a distinct evolutionary branch among the whole retroviral families (Figure 4).\n\nA ~400 bp band and a >150 bp band were amplified, lane 1 on the agarose gel. Lane 2 is the marker (100-200-300-400-500-600-700-800-900-1000-1500 bp). Multiple alignments of our 150 bp band (RGV, bold red) with the equivalent, already established Pol regions of retroviruses, amplified by the same technique, confirm that our 150 bp amplicon is a Pol-like gene.\n\nThe Retro-Giant Virus (RGV) >150 bp amplicon (red circle) was analyzed and compared with the same conserved region of other retroviruses. See the Methods section for information on the phylogenetic analysis. The RGV amplicon (green box) appears as a new, distinct, ancestral branch.\n\nAlong with the 150 bp band, a ~400 bp amplicon was also detected (Figure 3, lane 1). Multiple alignment and phylogenetic analysis showed that the 400 bp band aligns entirely on human chromosome 7 and clusters with human endogenous retrovirus (HERV) genes (Figure 5). This finding replicates consolidated reports of almost intact human endogenous retrovirus genomes in chromosome 717–24. Additional information is in Dataset 1–Dataset 3.\n\nThe ~400 bp amplicon aligns entirely on a fragment of human chromosome 7 and clusters with HERVs. This finding replicates established data of HERVs mapping to human chromosome 7. 
See the Methods section and Dataset 2 for information on the phylogenetic analysis.\n\nThe phylogenetic tree was made with the webserver http://www.phylogeny.fr. Muscle v3.8.31 was used for multiple alignment, Gblocks for alignment curation, PhyML for phylogeny and TreeDyn for tree drawing. A non-parametric, Shimodaira-Hasegawa-like approximate Likelihood-Ratio branch test (SH-like aLRT) was used as a statistical test.\n\nThe retro-giant viruses have reverse transcriptase activity. 10 μl of the lysed viral pellet produced cDNA from an RNA template (Figure 6).\n\n(A) RT reaction and synthesis of ss cDNA: Lane 1, reaction with a commercial RT enzyme; Lane 2, reaction with the viral pellet (reaction without RT enzyme); Lane 3, GeneRuler 1 kb DNA ladder; Lane 4, 100 bp DNA ladder. (B) GAPDH amplification from the ss-cDNA template: Lanes 1 and 2, reaction with the commercial RT enzyme; Lanes 3 and 4, reaction with the lysed viral pellet; Lane 5, negative control; Lane 6, DNA ladder; Lane 7, additional negative control.\n\nSummary of results (Figure 7 and Figure 8)\n\n1. The fraction extracted from human T cell leukaemia cells and purified through a 25% sucrose gradient contains human giant viruses with a retroviral core.\n\n2. They are ~400 nm in dimension, as shown using EM. The anti-FeLV p27gag antibody labelled the giant viral particles.\n\n3. Pan-retrovirus PCR and molecular phylogenesis confirm the presence of retroviral genes in the viral particles.\n\n4. The Retro-Giant viruses have reverse transcriptase activity.\n\n5. Like giant mimiviruses in the amoebas, the human giant viruses retain Gram staining and are associated with viral factories, but the substantial difference is their T tropism and retroviral core: they are human Retro-Giant viruses (RGV), missing from the current retroviral classification.\n\nGiant viruses were isolated from human leukaemia T cells on a 25% sucrose gradient (the sedimentation fraction of giant viruses in general). 
Cell nuclei were discarded before layering onto the sucrose. The isolated viral pellet was examined using EM immunogold, which confirmed the presence of ~400 nm giant viruses with retroviral antigens (anti-FeLV gag). The viral pellet was also stained with the Gram stain. The viral lysate had reverse transcriptase activity. A Pan-retrovirus PCR of the RNA extracted from the giant viral particles amplified the VLPQG–YMDD region of the RT gene. Molecular phylogeny suggests that the Retro-Giant viruses are a rare, new and distinct ancestral branch missing from the current classification of retroviruses.\n\nThese pictures are representative of 100 micrographs. Contact the corresponding author to inspect the entire collection. These human giant viruses have a retroviral antigenicity (positive immunogold with moAbs against FeLV retroviral antigens, black dots in the picture), reverse transcriptase activity and an amplified segment of the Reverse Transcriptase (Pol gene). The human Retro-Giant viruses retain the Gram stain and, inside the cells, they are associated with their viral factories, which also display the retroviral antigenicity1.\n\n\nDiscussion\n\nRobert Gallo reported the first human retrovirus HTLV in 1980. What we report here is the discovery of the first Mimivirus-sized human giant virus with a retroviral core.\n\nIn our previous work, conducted initially on human tissues with the anti-FeLV gag p27 moAb, EM depicted previously unreported ~400 nm gigantic particles associated with large aggregates, resembling viroplasms, recognized by the anti-FeLV p27gag Ab1. The particle diameters were more than four times the 100 nm size expected for retroviruses. 
These large particles and associated structures discovered in human cells appeared to morphologically parallel previously reported amoebal Mimiviruses (giant viruses and their viral factories)7.\n\nThe Gram positive blue granules that disclosed the existence of giant viruses in the amoeba similarly revealed this new giant virus in human cells, both in our previous study1 and in the current study.\n\nProteomic analyses suggested the presence of histone H4 variants common to environmental giant DNA viruses, but the striking difference was the unique mammalian retroviral nature of the human giant particles.\n\nHowever, working on human tissues was confusing and the distinction between the virus and the cells was blurred. How could we prove that we were really facing ancestral giant viruses with a retroviral core?\n\nIn order to distinguish the giant agent from the human cells, in the present study we isolated the viruses, examined their morphology using EM, extracted their nucleic acid and performed a Pan-retrovirus PCR and Sanger sequencing.\n\nThe presence of human Retro-Giant viruses was confirmed step by step. A white ring sedimented on a 25% sucrose gradient - the same sedimentation fraction as the giant DNA viruses isolated from the amoebas7. EM depicted ~400 nm giant viral particles that showed the ability to retain the Gram stain, but the striking difference was their unique mammalian retroviral nature. Distinct from the amoebas’ Mimiviruses, the viral particles were immuno-labelled with the anti-FeLV p27gag moAb and they contained retroviral RNA.\n\nRNA extracted exclusively from the viral particles, isolated on a sucrose gradient, was amplified with a Pan-retrovirus PCR technique able to detect a conserved fragment of the Reverse Transcriptase across different retroviral genera. To avoid any other source of contamination, we made sure that the cells’ nuclei were removed before layering on the sucrose gradient. 
DNA sequencing confirmed the presence of retroviral genes in the giant viruses. A Pol-like region, spanning the most conserved domains VLPQG and YMDD of the Reverse Transcriptase, was detected. In addition, the isolated human giant viruses showed reverse transcriptase activity.\n\nAnother amplicon, aligning entirely on chromosome 7, clustered with HERV genes. This finding replicates consolidated evidence of the chromosomal assignment and expression of full-length human endogenous retroviruses found on chromosome 717–24. Our results also confirm that these human Retro-Giant viruses have a T tropism, after their isolation from human T cell leukemia. This raises some implications about their possible oncogenic role.\n\nThe T tropism of the Retro-Giant viruses relies on their retroviral nature; in contrast, as recently described25, it is very improbable to find DNA mimiviruses in human T lymphocytes. Nevertheless, the discovery of the retro-giant viruses was made not only because of their ability to bind the anti-FeLV antibodies, but also because of fundamental elements that we took from the discovery of the amoebal giant mimiviruses in 20032. How could we have conceived the possibility of colouring the retro-giant viruses with the Gram staining without the previous discovery of mimiviruses in the amoeba? Our 400 nm particles would have been erroneously perceived as giant vesicles and not as Gram positive giant viruses. With the Retro-Giants, the concept of the giant virus is applied for the first time to the dogmas of retrovirology.\n\nThe Retro-Giant viruses represent a unique viral entity, one which suggests that defective retroviruses were possibly not sufficient for replication and required the interchange of genetic information with the giant viruses’ large biosynthetic assortment.\n\nIt follows that our human Retro-Giant virus can be viewed as a system that evolved from ancestral viruses to surround and shuttle retroviruses, providing a wider pathway for their dissemination. 
The ‘viral factories’ and viral histone H4, described in our previous study1, suggest a protected system that hijacks host immunity and epigenetics to enhance viral replication.\n\nThe fact that the Retro-Giants can be detected with an anti-FeLV gag is simply an amusing coincidence that must be addressed with additional proteomic and genetic analyses. It might be that prototypical leukaemia viruses were the first organisms to put these fragments of evolving protein machinery together to make something useful shared among ancient retroviruses. Feline retroviruses share conserved ancestral epitopes among different mammalian retroviruses26–31. In addition, the presence of a shared 5'-leader sequence in ancestral human and mammalian retroviruses and its transduction into Feline Leukemia virus has recently been documented32. In conclusion, we report not an archetypal human retrovirus, nor even a large human retrovirus, but a human giant virus, Mimivirus-like, with an ancestral mammalian retroviral core. Although sharing some morphological features with Mimiviruses (i.e. gigantic size, the ability to retain Gram staining and viral factories), this human Retro-Giant virus differs substantially from the DNA amoebal giant viruses in its unique possession of mammalian retroviral genes (gag-pol). For this discovery we chose traditional techniques adopted by other microbe hunters during their viral discoveries. However, a whole-genome shotgun sequence, a full reconstruction of the viral genome and a robust phylogenetic analysis are absolutely required to establish the complete structure and the evolutionary age of the Retro-Giant viruses.\n\n\nConclusion\n\nThe unusual features of the Retro-Giant viruses challenge our current concepts of retrovirology, and the Retro-Giants will not have an easy life. It is difficult to accept the concept of viruses being giant, but it becomes almost unbearable when the giants are Retro-Giants.\n\n“What? 
A giant virus, mimivirus-like, with a retroviral core? If they are so gigantic, why has nobody seen them before?” How can one accept the provocative idea that the Retro-Giant viruses could be ancestral creatures that evolved earlier than archetypal retroviruses, as suggested by our preliminary phylogeny analysis of the most conserved VLPQG–YMDD region of the Reverse Transcriptase? These kinds of questions reveal how complex scientific processes shape contemporary medical discoveries and their reception.\n\nThe giant mimiviruses in the amoeba are prehistoric creatures that evolved millions of years ago, at the dawn of the evolution of eukaryotic cells33,34. They are gigantic, yet nobody saw them until 20032.\n\nFor the discovery of the Retro-Giant viruses, their retroviral nature, their ability to bind a screen of antibodies against Feline retroviruses and some of the biochemical properties of giant viruses proved to be fortunate.\n\nNot archetypal retroviruses, but Gram positive ancestral giant viruses, Mimivirus-like, with associated viral factories and a retroviral core: this is the essence of the human Retro-Giants that were missing.\n\n\nData availability\n\nAll slides and EM grids are available to be examined; please contact the corresponding author.\n\nF1000Research: Dataset 1. 150 bp amplicon alignment against other VLPQG-YMDD Pol sequences of different retroviruses. DOI, 10.5256/f1000research.15118.d20807312\n\nF1000Research: Dataset 2. 400 bp amplicon sequence and its alignment against other retroviral families. DOI, 10.5256/f1000research.15118.d20807413\n\nF1000Research: Dataset 3. Uncropped and unedited blots. DOI, 10.5256/f1000research.15118.d20807535",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported in part by St Vincent Health Care Group of Dublin, Ireland.\n\n\nAcknowledgements\n\nThe anti-FeLV-related moAbs were kindly provided as a gift by Dr Chris Grant of Custom Monoclonals International (West Sacramento, CA 95691, USA).\n\nWe thank Microgem Laboratory Research (Napoli, Italy) for their technical assistance.\n\n\nReferences\n\nLusi EA, Maloney D, Caicci F, et al.: Questions on unusual Mimivirus-like structures observed in human cells [version 1; referees: 2 approved]. F1000Res. 2017; 6: 262. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLa Scola B, Audic S, Robert C, et al.: A giant virus in amoebae. Science. 2003; 299(5615): 2033. PubMed Abstract | Publisher Full Text\n\nYamada T: Giant viruses in the environment: their origins and evolution. Curr Opin Virol. 2011; 1(1): 58–62. PubMed Abstract | Publisher Full Text\n\nVan Etten JL, Lane LC, Dunigan DD: DNA viruses: the really big ones (giruses). Annu Rev Microbiol. 2010; 64: 83–99. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClaverie JM, Ogata H, Audic S, et al.: Mimivirus and the emerging concept of \"giant\" virus. Virus Res. 2006; 117(1): 133–44. PubMed Abstract | Publisher Full Text\n\nRaoult D, Audic S, Robert C, et al.: The 1.2-megabase genome sequence of Mimivirus. Science. 2004; 306(5700): 1344–50. PubMed Abstract | Publisher Full Text\n\nCampos RK, Boratto PV, Assis FL, et al.: Samba virus: a novel mimivirus from a giant rain forest, the Brazilian Amazon. Virol J. 2014; 11: 95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomas V, Bertelli C, Collyn F, et al.: Lausannevirus, a giant amoebal virus encoding histone doublets. Environ Microbiol. 2011; 13(6): 1454–66. PubMed Abstract | Publisher Full Text\n\nHepat R, Song JJ, Lee D, et al.: A viral histone h4 joins to eukaryotic nucleosomes and alters host gene expression. J Virol. 
2013; 87(20): 11223–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPoiesz BJ, Ruscetti FW, Gazdar AF, et al.: Detection and isolation of type C retrovirus particles from fresh and cultured lymphocytes of a patient with cutaneous T-cell lymphoma. Proc Natl Acad Sci U S A. 1980; 77(12): 7415–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTuke PW, Perron H, Bedin F, et al.: Development of a pan-retrovirus detection system for multiple sclerosis studies. Acta Neurol Scand Suppl. 1997; 169: 16–21. PubMed Abstract | Publisher Full Text\n\nLusi EA, Caicci F: Dataset 1 in: Discovery and description of the first human Retro-Giant virus. F1000Research. 2018. Data Source\n\nLusi EA, Caicci F: Dataset 2 in: Discovery and description of the first human Retro-Giant virus. F1000Research. 2018. Data Source\n\nPerron H, Garson JA, Bedin F, et al.: Molecular identification of a novel retrovirus repeatedly isolated from patients with multiple sclerosis. The Collaborative Research Group on Multiple Sclerosis. Proc Natl Acad Sci U S A. 1997; 94(14): 7583. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDonehower LA, Bohannon RC, Ford RJ, et al.: The use of primers from highly conserved pol regions to identify uncharacterized retroviruses by the polymerase chain reaction. J Virol Methods. 1990; 28(1): 33–46. PubMed Abstract | Publisher Full Text\n\nShih A, Misra R, Rush MG: Detection of multiple, novel reverse transcriptase coding sequences in human nucleic acids: relation to primate retroviruses. J Virol. 1989; 63(1): 64–75. PubMed Abstract | Free Full Text\n\nReus K, Mayer J, Sauter M, et al.: Genomic organization of the human endogenous retrovirus HERV-K(HML-2.HOM) (ERVK6) on chromosome 7. Genomics. 2001; 72(3): 314–20. PubMed Abstract | Publisher Full Text\n\nMayer J, Sauter M, Rácz A, et al.: An almost-intact human endogenous retrovirus K on human chromosome 7. Nat Genet. 1999; 21(3): 257–8. 
PubMed Abstract | Publisher Full Text\n\nTönjes RR, Czauderna F, Kurth R: Genome-wide screening, cloning, chromosomal assignment, and expression of full-length human endogenous retrovirus type K. J Virol. 1999; 73(11): 9187–95. PubMed Abstract | Free Full Text\n\nAlliel PM, Périn JP, Goudou D, et al.: The HERV-W/7q family in the human genome. Potential for protein expression and gene regulation. Cell Mol Biol (Noisy-le-grand). 2002; 48(2): 213–7. PubMed Abstract\n\nYu H, Liu T, Zhao Z, et al.: Mutations in 3'-long terminal repeat of HERV-W family in chromosome 7 upregulate syncytin-1 expression in urothelial cell carcinoma of the bladder through interacting with c-Myb. Oncogene. 2014; 33(30): 3947–58. PubMed Abstract | Publisher Full Text\n\nMayer J, Stuhr T, Reus K, et al.: Haplotype analysis of the human endogenous retrovirus locus HERV-K(HML-2.HOM) and its evolutionary implications. J Mol Evol. 2005; 61(5): 706–15. PubMed Abstract | Publisher Full Text\n\nWeiss RA, Stoye JP: Virology. Our viral inheritance. Science. 2013; 340(6134): 820–1. PubMed Abstract | Publisher Full Text\n\nBock M, Stoye JP: Endogenous retroviruses and the human germline. Curr Opin Genet Dev. 2000; 10(6): 651–5. PubMed Abstract | Publisher Full Text\n\nAbrahão J, Silva L, Oliveira D, et al.: Lack of evidence of mimivirus replication in human PBMCs. Microbes Infect. 2018; 20(5): 281–283. PubMed Abstract | Publisher Full Text\n\nDonner L, Fedele LA, Garon CF, et al.: McDonough Feline sarcoma virus: characterization of the molecularly cloned provirus and its feline oncogene (v-fms). J Virol. 1982; 41(2): 489–500. PubMed Abstract | Free Full Text\n\nSherr CJ, Fedele LA, Benveniste RE, et al.: Interspecies antigenic determinants of the reverse transcriptases and p30 proteins of mammalian type C viruses. J Virol. 1975; 15(6): 1440–8. PubMed Abstract | Free Full Text\n\nGeering G, Aoki T, Old LJ: Shared viral antigen of mammalian leukaemia viruses. Nature. 1970; 226(5242): 265–266. 
PubMed Abstract | Publisher Full Text\n\nIshida T, Pedersen NC, Theilen GH: Monoclonal antibodies to the v-fes product and to feline leukemia: virus P27 interspecies-specific determinants encoded by feline sarcoma viruses. Virology. 1986; 155(2): 678–87. PubMed Abstract | Publisher Full Text\n\nDavis J, Gilden RV, Oroszlan S: Multiple species-specific and interspecific antigenic determinants of a mammalian type C RNA virus internal protein. Immunochemistry. 1975; 12(1): 67–72. PubMed Abstract | Publisher Full Text\n\nWünsch M, Schulz AS, Kock W, et al.: Sequence analysis of Gardner-Arnstein feline leukaemia virus envelope gene reveals common structural properties of mammalian retroviral envelope genes. EMBO J. 1983; 2(12): 2239–2246. PubMed Abstract | Free Full Text\n\nKawasaki J, Kawamura M, Ohsato Y, et al.: Presence of a Shared 5'-Leader Sequence in Ancestral Human and Mammalian Retroviruses and Its Transduction into Feline Leukemia Virus. J Virol. 2017; 91(20): pii: e00829-17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoreira D, López-García P: Evolution of viruses and cells: do we need a fourth domain of life to explain the origin of eukaryotes? Philos Trans R Soc Lond B Biol Sci. 2015; 370(1678): 20140327. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForterre P, Gaïa M: Giant viruses and the origin of modern eukaryotes. Curr Opin Microbiol. 2016; 31: 44–9. PubMed Abstract | Publisher Full Text\n\nLusi EA, Caicci F: Dataset 3 in: Discovery and description of the first human Retro-Giant virus. F1000Research. 2018. Data Source"
}
|
[
{
"id": "35746",
"date": "13 Jul 2018",
"name": "Didier Raoult",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nTo the authors: This work reports the description of a giant virus with cross-reactivity and genes in common with retroviruses. The authors propose to describe the first giant retrovirus obtained from human cells. This work is very preliminary, and it is unfortunate not to have the complete genome of this virus. The reviewer is very supportive of this preliminary work. However, he thinks that, at a minimum, the RNA polymerase sequence of the giant viruses, for which primers have been described for Mimiviridae, should be added, which would give a much higher value to this paper.\n\nAll in all, this is a work that can attract the attention of the scientific community to an interesting emerging hypothesis.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3836",
"date": "18 Jul 2018",
"name": "Elena Angela Lusi",
"role": "Author Response",
"response": "Dear Prof Raoult,\n\nThank you for taking the time to review my manuscript. Your suggestions are well received and greatly appreciated. I expect to have the genome sequenced in a few weeks.\n\nWhat I have accomplished so far in the characterization of this human Retro-Giant virus is: isolation of the giant viral particles from human T cell leukaemia; electron microscopy immunogold of the giant viruses with a screen of anti-FeLV retrovirus Abs; Gram stain of the viral pellet; amplification of the VLPQ-YMDD region of the RT transcriptase gene from the RNA of the giant viruses; and reverse transcriptase activity of the giant viral particles; all of which I believe to be of great significance.\n\nI have also completed the proteomics analyses of the isolated viral particles. Retroviral as well as giant virus peptides were confirmed. One of the identified retroviral peptides has oncogenic properties. I will provide you with additional data once this last round of experiments has been completed.\n\nMight these Retro-Giants be the ancestors of archetypal retroviruses? That is the question and an interesting scenario in retrovirology. Your discovery of the giant viruses and my finding of human Retro-Giant viruses seem to be consistent with Howard Temin’s views.\n\nBest regards,\nElena Angela Lusi"
},
{
"c_id": "3961",
"date": "20 Sep 2018",
"name": "Elena Angela Lusi",
"role": "Author Response",
"response": "Dear Prof Raoult,\n\nAs per your request, please find the results of the whole genome sequence attached in the revised manuscript. With this shotgun sequencing I can confirm the discovery of a human Retro-Giant virus in human T cell leukaemia. This human virus is not a classic retrovirus, but a Gram-positive giant virus, Mimivirus-like, with a unique mammalian transforming retroviral core and a T cell tropism. The ORFs indicated in yellow are the retroviral core genes. The presence of leukaemogenic retroviral sequences confirms the numerous repeats of the EM immunogold, the documented retroviral antigenicity, the preliminary PCR results, and the reverse transcriptase activity of the giant viral particles.\n\nThe viral genome and the morphological analyses show the peculiar features of this giant microbial entity, which is the expression of a fascinating synthesis between archaea, prokaryotes and giant viruses. In some aspects, this human giant virus is more similar to an archaeon than to a virus, even though the viral apparatus is well present. We are facing ancestral creatures with a retroviral nature. The Retro-Giant viruses are possibly the expression of the transition from the RNA to the DNA world. This discovery may even change our perspective in retrovirology and the way we conceive retroviruses: not the basic gag–pol–env backbone and the small dimension anymore, but Gram-positive giant particles with prokaryotic features and retroviral genes related to mammalian oncogenic retroviruses.\n\nJust observe and consider Temin's last diagram in his Nobel address. He shows that a normal, non-carcinogenic avian virus interacts with something in the human cell to create Rous Sarcoma Virus. He also suspected an ancestor for the reverse transcriptase. I have assumed that most phylogenies are hypotheses and are based on indirect evidence. However, your discovery of the giant viruses and my Retro-Giants could be that something I suspect Temin had in mind. Phylogeny is sometimes a shield against the darkness, but our findings may have melted these shields.\n\nApart from the fact that the Retro-Giant viruses are ancestral to retroviruses, as a medical doctor working with people, I point to their leukaemogenic role. This really matters. We could isolate the Retro-Giants constantly from acute human T cell leukaemia with just a routine sucrose gradient: every time, in every replicate, fulfilling some of Koch's postulates. The Retro-Giant viruses have oncogenic retroviral genes. Actions to eradicate leukaemogenic retroviruses are currently not possible, but targeting the constant presence of the ancestors may provide a new methodology. We cannot ignore the presence of the Retro-Giants, their oncogenes and their presence in human leukaemia. Science seeks truth, and patients need a diagnosis and treatment.\n\nThe concept of human giant viruses with prokaryotic features and an oncogenic retroviral core is not an easy one. Fundamental elements and concepts of your discovery of the giant viruses helped me in understanding that this time we were facing exceptional ancestral creatures in which prototypical leukaemia viruses were the first organisms to put together fragments of an evolving protein machinery shared among ancient retroviruses, archaea and prokaryotes. The Retro-Giant viruses confirm Temin's prediction and, despite their prokaryotic and giant-virus features, they should be included in the current classification of retroviruses because their oncogenic retroviral genes may be the ones responsible for some types of human leukaemia.\n\nBest regards,\nElena Angela Lusi"
}
]
}
] | 1
|
https://f1000research.com/articles/7-1005
|
https://f1000research.com/articles/7-1366/v1
|
30 Aug 18
|
{
"type": "Software Tool Article",
"title": "DRETools: A tool-suite for differential RNA editing detection",
"authors": [
"Tyler Weirick",
"Patrick Trainor",
"Eric Rouchka",
"Andrew DeFilippis",
"Shizuka Uchida"
],
"abstract": "Recent tools to detect RNA editing have expanded our understanding of epitranscriptomics, linking changes in RNA editing to both disease and normal cellular processes. However, the research community currently lacks tools for determining if change in RNA editing or \"differential editing\" has occurred. To meet this need, we present DRETools, a command-line tool-set for finding differential editing among samples, editing islands, and editing sites.",
"keywords": [
"epitranscriptomics",
"RNA-seq",
"RNA editing",
"differential RNA editing",
"editing-per-kilobase",
"EPK"
],
"content": "Introduction\n\nRNA editing is a class of epitranscriptomic post-transcriptional modification found throughout metazoa, consisting of the abundant conversion of adenosine-to-inosine (A-to-I) by ADARs (adenosine deaminases acting on RNA) and the rare conversion of cytosine-to-uridine (C-to-U) by APOBEC (apolipoprotein B mRNA editing enzyme, catalytic polypeptide-like)1. RNA editing is particularly interesting as it is detectable as A-to-G and C-to-T mismatches to the reference genome within standard RNA-sequencing data via specialized computational pipelines2. An increasing number of studies link changes in editing at specific sites or clusters of sites to diseases, such as epilepsy and atherosclerosis3,4. Yet, no software for detecting differential editing is available. To meet this need, we present DRETools5: 1) to calculate units that help reduce sample bias, similar to FPKM for RNA expression; and 2) to find differentially edited sites and editing islands (i.e., clusters of editing sites)6. Further, we showcase two examples of finding differential editing and related tasks with DRETools7.\n\n\nMethods\n\nDRETools can be run via the command line by typing “dretools”, which will print the main help menu. The main help menu contains a list of operations that are available from dretools, with short descriptions of each operation’s purpose. To run an operation, type dretools followed by the operation name. Further detail on each operation, including available command-line arguments and usage examples, can be found by running an operation with the --help argument. On the main help menu, operations are organized into sub-headings based on similar functions. Further detail of each sub-heading and corresponding operations can be found in the following sections.\n\nOne fundamental problem of comparing editing between groups of samples is a lack of standardized units for describing editing within samples, editing islands, and sites. 
To this end, DRETools implements Editing Per Kilobase (EPK), based on “overall editing” (OE)8. EPK builds upon OE by considering both A-to-G and C-to-T transitions, excluding editing sites with 100% edited bases as potential mutations, and scaling by 10^3 for readability (similar to FPKM). EPK is calculated by dividing the total number of “edited” bases by the total number of bases overlapping known editing sites and multiplying by 10^3. In addition to samples, DRETools can compute EPKs for editing islands and sites. Sample-wise editing can be computed with the “sample-epk” function and can be thought of as the global-editing-rate, whereas the EPK of islands and sites can be computed with \"region-epk\" and \"edsite-epk\", respectively, and thought of as the “local-editing-intensity”.\n\nRecently, a method was developed to find differentially edited sites between epileptic and control mouse hippocampi3. However, methods capable of comparing different tissues are also needed. The problem is that unless the global-editing-rates are similar, we cannot determine whether changes are due to differing global-editing-rates or other phenomena, such as competition with N6-methyladenosine (m6A)9. Furthermore, ADARs have been described to edit both specific sites in some cases and non-specifically within small regions in other cases10. Therefore, in addition to individual editing sites, looking at clusters of editing is also of interest. DRETools addresses both of these issues by allowing the normalization of both the global-editing-rate and the site or island local-editing-intensity in EPK, and testing for differential editing using a linear model (LM) with the formula \"logFeatureEPK ~ logSampleEPK + featureLength + averageReadDepth\" (features can be sites or islands), which adjusts expectations for what constitutes differential editing.\n\nDRETools also includes various helper functions. 
For example, the merge section contains functions to find editing islands6 and create consensus sets of editing sites by merging sites from multiple samples. Finally, the stats heading contains functions that calculate useful information about editing at the sample, gene, and site levels, such as the editable area or the number of editing sites falling in 3’/5’-untranslated regions, introns, or exons.\n\nA standard laptop computer with the latest versions of R and Python3 will handle most applications.\n\n\nResults\n\nTo illustrate the utility of DRETools, we surveyed differential editing in human umbilical vein endothelial cells (HUVEC) transfected with either an siRNA against ADAR1 or against a random sequence (control)4 and the immortalized cell lines GM12787 and K56211. First, we surveyed sample-wise editing using the function “sample-epk” (Figure 1A, B). Using EPK reduces variation within groups compared to using the raw number of editing sites. For example, the coefficient of variation drops from 0.21 to 0.05 for the silenced ADAR1 group and from 0.52 to 0.01 for the control group. Similarly, when comparing the immortalized cell lines, the coefficient of variation is reduced from 0.57 to 0.25 and from 0.46 to 0.11, respectively (Figure 1C, D).\n\n(A) The number of editing sites in HUVEC control and silenced ADAR1 groups (p=0.77). NS, p>0.05. (B) HUVEC control and silenced ADAR1 (siADAR1) represented in EPK (p=7.8E-5). **p<0.0001. (C) The number of editing sites detected in GM12787 and K562 cells (p=1.2E-3). *p<0.05. (D) Editing in GM12787 and K562 cells represented in EPK (p=2.5E-6). **p<0.0001. (E–H) Histograms detailing the distribution of p-values when testing for differential editing in a site- or island-wise manner. The site-wise comparison between: (E) siADAR1 and control; and (F) GM12787 and K562 cells. 
The island-wise comparison between: (G) siADAR1 and control; and (H) GM12787 and K562 cells.\n\nNext, we compared the EPKs of editing islands within the immortalized cell lines using “epk-region”. Using EPK to represent editing islands, as opposed to the number of edited bases, reduces the coefficient of variation from 0.60 ± 0.21 to 0.31 ± 0.11 (p=2E-30). Finally, we tested for differential editing using the functions “region-diff” for islands and “site-diff” for editing sites (Figure 1E–H). Comparing silenced ADAR1 to the control, the LM yielded a uniform distribution of p-values. In contrast, when a t-test was applied to the same data, the distribution of p-values was shifted to the left and exhibited greater skew. However, in the immortalized cell lines, p-values calculated by the LM are more leftward skewed, while p-values from the t-test became more uniformly distributed. This provides evidence that the LM can effectively reduce type I errors when testing for differential editing. For example, the LM correctly recognizes that most of the differences between the silenced ADAR1 and control groups arise from the reduction of the global-editing-rate in the silenced samples, whereas the t-test, which does not consider the global-editing-rates, finds many differentially edited sites and islands. Conversely, when comparing the immortalized cell lines, despite the large difference in EPK, many differentially edited sites and islands are detected. While deeper biological validation is needed to be certain, these could be instances of some other phenomenon, such as m6A9, affecting the editing of individual sites or islands.\n\n\nConclusions\n\nDRETools is a command-line tool suite for finding differentially edited sites and islands. It allows users to calculate units that reduce sample bias and to find differentially edited sites and islands even when the global-editing-rates of the groups being compared are different. 
Furthermore, it also includes a variety of other features for exploring RNA editing. These make DRETools a valuable tool for further investigating epitranscriptomics.\n\n\nData availability\n\nAll RNA-seq data are publicly available and were downloaded from the NCBI SRA database12. The HUVEC data sets were generated by Stellos et al., 20164, and the GM12787 and K562 cell data were generated by the ENCODE project11. Lists of accession numbers, pipelines used to generate analyses, and intermediate files generated are archived on Zenodo7.\n\n\nSoftware availability\n\nSource code available from: http://dretools.bitbucket.io/.\n\nData and analysis pipelines: https://zenodo.org/record/1400648.\n\nSource code at time of publication: https://zenodo.org/record/1400005.\n\nLicense: The software, data, and analysis pipelines are available under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.",
"appendix": "Author contributions\n\n\n\nTW : Conception, Analysis, Investigation, Methodology, Project Administration, Software, Validation, Visualization, Writing - Original Draft Preparation, Writing - Review & Editing\n\nPT: Conception, Investigation, Methodology, Validation, Visualization, Writing – Original Draft Preparation\n\nAD: Supervision, Writing – Review & Editing\n\nER: Funding Acquisition, Writing – Review & Editing\n\nSU: Conceptualization, Funding Acquisition, Project Administration, Resources, Supervision, Validation, Writing – Original Draft Preparation, Writing – Review & Editing\n\n\nGrant information\n\nFunding provided by the V.V. Cooke Foundation (Kentucky, U.S.A.); University of Louisville 21st Century University Initiative on Big Data in Medicine (Z1762); National Institutes of Health (NIH; P20GM103436); and the startup funding from the Mansbach Family, the Gheens Foundation and other supporters at the University of Louisville. Its contents are solely the responsibility of the authors and do not represent the official views of the funding organization.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThis work utilized the University of Louisville Cardinal Research Cluster.\n\n\nReferences\n\nPorath HT, Knisbacher BA, Eisenberg E, et al.: Massive A-to-I RNA editing is common across the Metazoa and correlates with dsRNA abundance. Genome Biol. 2017; 18(1): 185. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDiroma MA, Ciaccia L, Pesole G, et al.: Elucidating the editome: bioinformatics approaches for RNA editing detection. Brief Bioinform. 2017. PubMed Abstract | Publisher Full Text\n\nSrivastava PK, Bagnati M, Delahaye-Duriez A, et al.: Genome-wide analysis of differential RNA editing in epilepsy. Genome Res. 2017; 27(3): 440–450. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nStellos K, Gatsiou A, Stamatelopoulos K, et al.: Adenosine-to-inosine RNA editing controls cathepsin S expression in atherosclerosis by enabling HuR-mediated post-transcriptional regulation. Nat Med. 2016; 22(10): 1140–1150. PubMed Abstract | Publisher Full Text\n\nWeirick T: Pipelines and intermediate files used for testing DRETools (Version 1) [Data set]. Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1400648\n\nJohn D, Weirick T, Dimmeler S, et al.: RNAEditor: easy detection of RNA editing events and the introduction of editing islands. Brief Bioinform. 2017; 18(6): 993–1001. PubMed Abstract | Publisher Full Text\n\nWeirick T: DRETools source code at time of publication (Version 1). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1400005\n\nTan MH, Li Q, Shanmugam R, et al.: Dynamic landscape and regulation of RNA editing in mammals. Nature. 2017; 550(7675): 249–254. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXiang JF, Yang Q, Liu CX, et al.: N6-Methyladenosines Modulate A-to-I RNA Editing. Mol Cell. 2018; 69(1): 126–135.e6. PubMed Abstract | Publisher Full Text\n\nJepson JE, Reenan RA: RNA editing in regulating gene expression in the brain. Biochim Biophys Acta. 2008; 1779(8): 459–470. PubMed Abstract | Publisher Full Text\n\nENCODE Project Consortium, Birney E, Stamatoyannopoulos JA, et al.: Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature. 2007; 447(7146): 799–816. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeinonen R, Sugawara H, Shumway M, et al.: The sequence read archive. Nucleic Acids Res. 2011; 39(Database issue): D19–D21. PubMed Abstract | Publisher Full Text | Free Full Text"
}
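The EPK unit defined in the article's Methods (edited bases over covered bases at known sites, fully edited sites excluded, scaled by 10^3) can be sketched in a few lines of Python. This is an illustrative reimplementation, not DRETools' actual code; the function name `epk` and the per-site `(edited, covered)` tuple input are assumptions.

```python
# Illustrative sketch of the EPK (Editing Per Kilobase) unit described in the
# DRETools article. Not the tool's actual API; input format is assumed.

def epk(site_counts):
    """Compute EPK from per-site (edited_bases, covered_bases) counts.

    Sites where every overlapping base is edited are skipped as potential
    genomic mutations, per the article's definition of EPK.
    """
    edited = 0
    covered = 0
    for e, c in site_counts:
        if c == 0 or e == c:  # skip uncovered and 100%-edited sites
            continue
        edited += e
        covered += c
    if covered == 0:
        return 0.0
    return edited / covered * 1e3  # scale by 10^3 for readability

# Example: the fully edited site (5, 5) is excluded from both totals,
# leaving 10 edited bases over 150 covered bases.
print(epk([(10, 100), (0, 50), (5, 5)]))  # ≈ 66.67
```

Like FPKM for expression, the point of the scaling is readability: raw global editing fractions are often very small numbers.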
|
[
{
"id": "37753",
"date": "13 Sep 2018",
"name": "Ernesto Picardi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe manuscript by Weirick et al. introduces a tool-suite to calculate differential RNA editing.\nThe interest in RNA editing is rapidly growing and, thus, tools to improve the investigation of RNA editing in different experimental conditions are in demand.\nDRETools includes some functions mainly to post-process results from RNAEditor, developed in the same research group, even though they can be applied to results from other tools after ad hoc parsing.\nCalculations are based on the definition of EPK (editing per kilobase), in turn based on the overall editing concept introduced by Tan et al. 2017. The overall editing is simply calculated as the total number of reads with G at all known editing positions as compared to all reads covering the positions. In other terms, this metric is a global editing frequency per sample.\nThe authors multiply the overall editing by 1000 in order to improve readability, because in some cases very low numbers may appear. Although the authors show that EPK values are useful over the raw count of editing sites, the properties of EPK are not well investigated. The number of As and Gs depends on the filters used to detect editing and on the quantity of reads generated by sequencing. Base quality is also an additional factor to consider. I’m not completely sure that EPK can take into account the number of reads per sample. I suggest performing further investigations calculating global EPK in samples belonging to the same tissue. For example, the authors could use GTEx RNA-seq from three or four tissues and at least 10 experiments per tissue.\nOther authors have proposed similar indices to detect editing activity in a sample. For example, Paz-Yaacov introduced the Alu editing index, a robust measure that can also be calculated on additional genomic properties (recoding sites, conserved sites and so on). This index has been successfully used in several cases, and the authors need to perform appropriate comparisons.\nRegarding the statistical tests used to detect differential editing, the authors implement a linear model and the t-test. In Figure 1 (panels E to H), p-value distributions are shown and great differences seem to appear. The authors should discuss the reasons for these observed discrepancies. I suggest the authors check the use of non-parametric tests, since they can be robust in the case of small samples or when users cannot easily establish whether normality and other assumptions are met.\nAdditionally, the tool does not take into account correction for multiple testing, so this needs to be implemented.\nIn humans, RNA editing has different properties depending on the affected genomic regions. For example, Alu editing is different from recoding editing. Is there a way to take such properties into account?\nFinally, DRETools features should be better described in the manuscript, and details about input and output files should be included in the wiki pages.\nSome experimental validation is required to corroborate the tool's results.\nIn my opinion DRETools is useful, but major improvement is deeply needed.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "37750",
"date": "13 Sep 2018",
"name": "Yicheng Zhao",
"expertise": [
"Reviewer Expertise Non coding RNA function",
"RNA editing and related bioinformatics tech"
],
"suggestion": "Approved",
"report": "Approved\n\nCurrently, more research has focused on human RNA editing. This software is very useful for detecting human differentially edited sites and islands. However, I suggest the authors add a brief description of RNAEditor in the Methods section, which will be helpful for understanding how to handle and analyse RNA editing via RNA-seq data. In addition, the authors should provide the required hardware configuration for running DRETools, and the run times when analysing each testing sample.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3985",
"date": "19 Sep 2018",
"name": "Shizuka Uchida",
"role": "Author Response",
"response": "Thank you very much for your valuable comments. We have now added a brief description of RNAEditor in the Method section. Furthermore, we have included the required hardware configuration for running DRETools along with run times when analyzing each testing sample."
}
]
}
] | 1
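The first reviewer above notes that DRETools does not yet correct for multiple testing when scanning many sites and islands. A standard remedy is the Benjamini-Hochberg (BH) false-discovery-rate procedure, sketched here as a generic, standalone implementation; it is not part of DRETools.

```python
# Generic Benjamini-Hochberg FDR adjustment, shown only to illustrate the
# multiple-testing correction the reviewer requests. Not DRETools code.

def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values, preserving the input order."""
    n = len(pvalues)
    # Indices of p-values sorted ascending, so rank k has the k-th smallest.
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone adjusted values.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Example: four raw p-values and their BH-adjusted counterparts.
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))
```

Applied to the per-site or per-island p-values produced by the linear model, such an adjustment would control the expected fraction of false discoveries among the sites called differentially edited.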
|
https://f1000research.com/articles/7-1366
|
https://f1000research.com/articles/6-2167/v1
|
21 Dec 17
|
{
"type": "Research Article",
"title": "Ocular surface symptoms among individuals exposed to ambient levels of traffic derived air pollution – a cross-sectional study",
"authors": [
"Nabin Paudel",
"Sanjeev Adhikari",
"Sarina Manandhar",
"Ashesh Acharya",
"Ajit Thakur",
"Bhairaja Shrestha"
],
"abstract": "Background: The ocular surface is separated from outdoor air pollutants by only a thin layer of tear film, making individuals exposed to outdoor air pollution prone to various ocular surface disorders. The aim of this study was to determine the magnitude of ocular surface disorder symptoms among traffic police officers of Kathmandu, Nepal.\n\nMethods: Two hundred traffic police officers working at different traffic police office branches of Kathmandu, Nepal were invited to the police headquarters for an eye and vision examination. Among them, 91 individuals (95% males) completed the ocular surface disease index (OSDI) questionnaire and underwent Schirmer’s I tear test.\n\nResults: Symptoms of ocular surface disorders were reported by over 80% of the individuals. Approximately two fifths of the individuals (38%) reported severe symptoms. Tear secretion was found to be below normal, using Schirmer’s tear test, in only 17% of the individuals. There was no association between the OSDI score and Schirmer’s tear test scores (r = 0.008, p = 0.94). A weak but significant relationship was observed between the OSDI score and job duration (r = 0.21, p = 0.04). Individuals exposed to outdoor air pollution for more than 10 years had higher odds of reporting ocular surface complaints compared to those who were exposed for less than 10 years (OR = 3.94, p = 0.02).\n\nConclusion: Ocular surface disorder symptoms are common among traffic police officers of Kathmandu, Nepal. The duration of exposure appears to contribute significantly to the increased symptoms in this vulnerable population.",
"keywords": [
"Air pollution",
"ocular surface",
"OSDI questionnaire",
"Kathmandu",
"Dry Eye"
],
"content": "Introduction\n\nStudies conducted so far on air pollution and the human ocular surface have demonstrated a link between air pollution and ocular discomfort, abnormal tear structure, and ocular surface inflammation1. There are only a handful of studies demonstrating the association between signs and symptoms of the ocular surface and air pollution2,3. Studies are even scarcer from cities in developing countries, where the concentration of air pollutants in the environment is on the rise. Kathmandu is considered one of the most highly polluted cities in the world, and Nepal is listed as one of the most polluted countries according to the WHO urban air pollution database. Traffic police officers in Kathmandu spend most of their time outdoors, controlling the flow of vehicles because of the unavailability of modern electronic traffic management systems in the city.\n\nThe purpose of this study was to determine the magnitude of ocular surface disorders based on a subjective symptoms questionnaire and a commonly used tear secretion test (Schirmer’s I test), and then to explore the association between these two tests in traffic police officers of Kathmandu, Nepal.\n\n\nMethods\n\nThis study involved a cross-sectional, community-based assessment of 91 traffic police officers (86 male, 5 female) recruited from among the officers of the Traffic Metropolitan head office, Baggikhana, Kathmandu, Nepal. The participants were invited by word of mouth by the head officer, along with a formal written notice. Participants with any chronic illness, a smoking habit, use of any systemic drugs, any ocular disease, previous ocular surgery, or current contact lens wear were excluded from the study. All of the individuals had presenting visual acuity better than 20/25 at both near and far. Only those participants who met our inclusion criteria and agreed to participate were included in the study. 
The study was conducted in the month of August 2017.\n\nThe study protocol was approved by the Ethics Committee of the Nepal Health Research Council (Reg. No. 218/2017). The study was part of a larger program that was aimed at determining ocular and visual disorders in police officers. All of the research participants provided their written informed consent for participation before being enrolled in the study. The Declaration of Helsinki was followed while assessing the participants.\n\nThe Ocular Surface Disease Index Questionnaire4 is a validated tool to assess the subjective symptoms of individuals with potential ocular surface disorders. The Nepali-translated OSDI questionnaire was administered to all of the participants before conducting the clinical assessment. An OSDI score of 0–12 was considered normal, 13–22 as mild, 23–32 as moderate, and 33–100 as a severe ocular surface disorder5.\n\nThe Schirmer I tear test was conducted under topical anesthesia (0.5% proparacaine). The test was conducted in an indoor setting at room temperature. After instilling one drop of proparacaine in each eye, any residual drop was wiped from the eye with cotton. The Schirmer strip was then placed on the lateral third of the lower eyelid, taking special care not to touch the cornea. The strip was removed from the lid after 5 minutes. A measurement of 5 mm or less was considered abnormal.\n\nRoutine ophthalmological examinations, including visual acuity, refraction, anterior segment assessment and posterior segment assessment, were also conducted but were not analysed as part of this study.\n\nOther variables such as age, gender, and duration of working as a traffic officer were also recorded.\n\nThe OSDI score was calculated using the following formula:\n\nOSDI score = [(sum of scores for all questions answered) × 25] / (total number of questions answered)\n\nData are presented as mean±SD unless mentioned otherwise. 
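The OSDI scoring rule described above (items scored 0–4, unanswered items excluded from both the sum and the count, severity bands of 0–12/13–22/23–32/33–100) can be sketched in Python. The function names and the example responses below are illustrative, not taken from the study's data:

```python
def osdi_score(responses):
    """OSDI score = (sum of answered item scores) * 25 / (number of items answered).

    Each of the 12 OSDI items is scored 0-4; unanswered items are passed
    as None and excluded from both the sum and the count.
    """
    answered = [r for r in responses if r is not None]
    return sum(answered) * 25 / len(answered)


def osdi_category(score):
    """Severity bands used in the study: 0-12 normal, 13-22 mild,
    23-32 moderate, 33-100 severe."""
    if score <= 12:
        return "normal"
    if score <= 22:
        return "mild"
    if score <= 32:
        return "moderate"
    return "severe"


# Hypothetical respondent: 12 items, two left unanswered.
responses = [2, 3, 1, 0, 4, 2, None, 1, 3, 2, None, 0]
print(osdi_score(responses))                 # 45.0
print(osdi_category(osdi_score(responses)))  # severe
```

Note that dividing by the number of items actually answered (rather than a fixed 12) keeps the score on the same 0–100 scale even with missing responses.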
An independent-sample t-test was employed to compare means between two groups, whereas one-way ANOVA, along with appropriate post-hoc tests, was employed to compare means between three or more groups. Pearson correlation was employed to determine the association between variables. Binary logistic regression analysis was also employed to determine the association between dependent and independent variables. Statistical analysis was conducted using SPSS v22 (IBM Corp.).\n\n\nResults\n\nThe mean age of the participants was 32±6 years. The OSDI questionnaire was completed by all subjects. The mean OSDI score was 30.11±19.70 (range 2 to 97.90). Based on the OSDI score, 81% of the participants reported symptoms of ocular surface disorder; over one third (38%) of the participants reported symptoms of severe ocular surface disorder (Figure 1).\n\nSchirmer’s test of both eyes was conducted in 91% of the participants. The mean ± SD Schirmer’s test value (mm) for the right eye (RE) and left eye (LE) was 16.12±10.42 and 17.42±10.84, respectively. There was a high correlation (r= 0.80, p<0.001) but a non-significant difference (p=0.08) in the Schirmer’s test score between the two eyes. Hence, the results of the right eye only were used for analysis. Only 17% of the subjects had an abnormal Schirmer’s test result.\n\nNo association was observed between the OSDI score and the Schirmer’s test results (r=0.008, p=0.94). Nor was a significant correlation observed between the OSDI scores and age (r=0.15, p=0.14). A weak, but statistically significant, positive correlation was observed between OSDI score and duration of work (r=0.21, p=0.04). The mean duration of work was 11±6 years. Individuals who had held the job for more than 5 years had more severe symptoms than those who had held the job for less than five years (p=0.001). 
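The correlations quoted in the Results (e.g., r = 0.80 between right- and left-eye Schirmer's scores) are Pearson correlation coefficients. A minimal sketch of the computation, using made-up samples rather than the study's dataset:

```python
import math


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Two perfectly linearly related samples give r close to 1.0 (up to
# floating-point rounding); in practice one would use scipy.stats.pearsonr.
print(pearson_r([5, 10, 15, 20], [16, 17, 18, 19]))  # ~1.0
```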
A one-way ANOVA test demonstrated a significant difference in the OSDI score between different age groups (<30, 30–40 and >40 years) (F2,88 = 3.86, p=0.025). The symptoms score was statistically significantly different between individuals who had worked for up to 5 years and those who had worked for six to ten years (mean difference, 13.65 ± 6.44, 95% CI, 0.85 to 26.46) or more than 10 years (mean difference, 16.48 ± 5.93, 95% CI, 4.70 to 28.27). However, no statistically significant difference was observed between individuals who had held the job for 6–10 years and >10 years (mean difference 2.82 ± 4.54, 95% CI, -6.20 to 11.50) (Figure 2). Furthermore, individuals who held the job for 10 years or more had significantly higher odds of having ocular surface symptoms as compared to those who had held the job for less than ten years (OR: 3.94, 95% CI, 1.25-12.8, p = 0.02). There was a slight increase in the odds of having ocular surface symptoms after adjusting for age and gender, but it was borderline significant (OR: 4.28, 95% CI, 0.93-19.58, p = 0.05).\n\nFigure 2 legend: ** = statistically significant; * = non-significant. Error bars denote standard deviation.\n\nOf the 74 subjects identified as having symptoms of ocular surface disorders according to the OSDI score, only 16 were identified as abnormal by the Schirmer’s test.\n\n\nDiscussion\n\nThis study explored the symptoms of ocular surface disorders among individuals exposed to traffic-derived air pollution in Kathmandu, Nepal. A remarkable number of individuals reported symptoms, with over one third reporting symptoms of severe ocular surface disorder. Ocular surface disorders vary with age: the prevalence is 11% among individuals aged 40 to 59 and 18% in individuals above 80 years6. In this study, individuals were between 18 and 48 years of age, and 80% had symptoms of OSD, which is alarmingly high compared to the general population6.\n\nPrevious reports exploring symptoms in individuals exposed to traffic-derived air pollution have found mixed results. 
Torricelli et al.7 studied a group of 71 taxi drivers and traffic controllers and reported that most of their subjects had few symptoms and fell within the normal category according to the OSDI scoring. However, they demonstrated that objective measures such as tear osmolarity and break-up time were significantly reduced. In contrast, Saxena et al. reported that most of the subjects who were exposed to air pollution had more symptoms (irritation, itching, lacrimation, and redness) compared with those who were not exposed2.\n\nA majority of the individuals’ Schirmer’s results were within the normal range in the present study. Similarly normal Schirmer’s test findings have been reported by previous researchers7,8. This finding is not surprising, as the poor diagnostic ability of the Schirmer’s test for detecting ocular surface dysfunction has been well recorded in the literature9. The Schirmer’s test has shown normal results in many previous studies conducted among established dry eye populations9.\n\nThe lack of correlation between the OSDI scores and the Schirmer’s results is also not surprising, as this finding is consistent with most previous studies, in which the signs and symptoms of ocular surface disorders, particularly those of dry eye, did not correlate with one another10. It is postulated that dry eye is a multifactorial disorder, and different mechanisms and factors act in complement, or may act independently, to elicit the symptomatology of this condition11.\n\nThe weak but statistically significant positive correlation between the OSDI score and duration of holding the current job (years) suggests that the longer the exposure, the more severe the symptoms. However, the finding that the mean symptom score is not significantly different between individuals who have held the job for 6–10 years and those who have held it for over 10 years suggests that exposure to ambient air pollution for over 5 years already poses a significant impact on the ocular surface. 
Furthermore, the higher odds of having ocular symptoms in individuals with over 10 years in the job suggests that air pollution may have a cumulative effect on the ocular surface over the years until symptoms start to appear.\n\nNepal was ranked 177th of 180 countries, just above China, Bangladesh, and India, for air quality according to the 2016 Environmental Performance Index (EPI)12. A 2007 report found that the air pollution concentration of the Kathmandu valley, specifically the PM2.5, was 17-18 fold higher than the recommended 25 ug/m3 threshold provided by the WHO. A 2016 air pollution report on Nepal provided by the WHO has shown a considerable increase in PM2.5 concentration over a decade13. Our study was conducted in the month of August 2017. The mean 24-hour average PM2.5 concentration during that month was 113.5 ug/m3 (approximately 5 fold higher than that recommended by the WHO) and the PM10 concentration was 633 ug/m3 (approximately 13 fold higher than the WHO recommendation) (see Kathmandu Air Pollution: Real-time Air Quality Index and Department of Environment, Air Quality Monitoring). In light of the high levels of air pollution in Kathmandu, the high number of individuals reporting severe symptoms of ocular surface disease in our study can be explained.\n\nWhile this study provides novel data on ocular health issues in this vulnerable population, some limitations must be acknowledged. Firstly, only two tests – the OSDI questionnaire and the Schirmer’s I test – were used to determine ocular surface disorder. Use of more sensitive tests such as corneal and conjunctival staining, tear film break-up time and tear osmolarity would have detected more individuals with ocular surface disorders, and may also have demonstrated structural/physiological anomalies of the ocular surface. 
However, as this was a community-based study, tests were chosen that did not require sophisticated clinical instruments and investigations. Secondly, the actual duration and concentration of air pollution exposure in our subjects were not assessed. Measurement of the PM2.5 and NO2 concentrations, along with a range of ocular surface disorder diagnostic tests like those of a few previous studies, would have given us a better understanding of the association between air pollution and ocular surface disorders. Thirdly, a comparison with a control group of individuals exposed to lower levels of air pollution would have helped confirm that the ocular symptoms were primarily due to air pollution. Nevertheless, this study was a first step toward generating awareness of, and exploring, symptoms related to ocular surface disorder in this vulnerable population. Future large-scale studies need to be conducted in city areas to explore ocular surface anomalies in this vulnerable population, and necessary precautions should be taken to protect the ocular health of people exposed to outdoor air pollution.\n\n\nConclusion\n\nTraffic police officers of the Kathmandu valley have a high prevalence of ocular surface complaints, which do not correlate well with the tear secretion test. The duration of employment appears to contribute somewhat to the increasing symptoms. In the meantime, the use of protective sunglasses and regular eye consultations are recommended for people who are exposed to outdoor air pollution. More importantly, the government must implement new rules to reduce the levels of outdoor air pollution.\n\n\nData availability\n\nDataset 1: Data on the ocular surface symptoms among individuals exposed to ambient levels of air pollution. DOI: 10.5256/f1000research.13483.d18859112",
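The odds ratios quoted in the Results (e.g., OR 3.94, 95% CI 1.25–12.8 for 10 or more years on the job) are standard 2×2-table odds ratios with Wald-type confidence intervals. A minimal sketch of the computation, using hypothetical counts since the study's raw cross-tabulation lives in the linked dataset:

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% confidence interval.

    a: exposed with symptoms,   b: exposed without symptoms,
    c: unexposed with symptoms, d: unexposed without symptoms.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi


# Hypothetical counts, not the study's data:
or_, lo, hi = odds_ratio_ci(40, 10, 34, 34)
print(or_)  # 4.0
```

A CI excluding 1.0 corresponds to a statistically significant association at the 5% level, which is why the adjusted OR of 4.28 with CI 0.93–19.58 is described in the paper as only borderline significant.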
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nTorricelli AA, Novaes P, Matsuda M, et al.: Ocular surface adverse effects of ambient levels of air pollution. Arq Bras Oftalmol. 2011; 74(5): 377–81. PubMed Abstract | Publisher Full Text\n\nSaxena R, Srivastava S, Trivedi D, et al.: Impact of environmental pollution on the eye. ActaOphthalmologica. 2003; 81(5): 491–4. PubMed Abstract | Publisher Full Text\n\nVersura P, Profazio V, Cellini M, et al.: Eye discomfort and air pollution. Ophthalmologica. 1999; 213(2): 103–9. PubMed Abstract | Publisher Full Text\n\nMiller KL, Walt JG, Mink DR, et al.: Minimal Clinically Important Difference for the Ocular Surface Disease Index. Arch Ophthalmol. 2010; 128(1): 94–101. PubMed Abstract | Publisher Full Text\n\nMoss SE, Klein R, Klein BE: Prevalence of and risk factors for dry eye syndrome. Arch Ophthalmol. 2000; 118(9): 1264–1268. PubMed Abstract | Publisher Full Text\n\nTorricelli AA, Novaes P, Matsuda M, et al.: Correlation between signs and symptoms of ocular surface dysfunction and tear osmolarity with ambient levels of air pollution in a large metropolitan area. Cornea. 2013; 32(4): e11–5. PubMed Abstract | Publisher Full Text\n\nNovaes P, Saldiva PH, Matsuda M, et al.: The effects of chronic exposure to traffic derived air pollution on the ocular surface. Environ Res. 2010; 110(4): 372–4. PubMed Abstract | Publisher Full Text\n\nCho P, Yap M: Schirmer test. I. A review. Optom Vis Sci. 1992; 70(2): 152–6. PubMed Abstract | Publisher Full Text\n\nNichols K, Nichols J, Mitchell G: The lack of association between signs and symptoms in patients with dry eye disease. Cornea. 2004; 23(8): 762–770. PubMed Abstract | Publisher Full Text\n\nLemp MA: Advances in Understanding and Managing Dry Eye Disease. Am J Ophthalmol. 2008; 146(3): 350–356. 
PubMed Abstract | Publisher Full Text\n\nHsu A, Zomer A: Environmental Performance Index. Wiley StatsRef: Statistics Reference Online. 2016; 1–5. Publisher Full Text\n\nPaudel N, Adhikari S, Manandhar S, et al.: Dataset 1 in: Ocular surface symptoms among individuals exposed to ambient levels of traffic derived air pollution – a cross-sectional study. F1000Research. 2017. Data Source\n\nSchiffman R, Christianson M: Reliability and validity of the ocular surface disease index. Arch Ophthalmol. 2000; 118(5): 615–21. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "31452",
"date": "22 Jun 2018",
"name": "Luc LR Int Panis",
"expertise": [
"Reviewer Expertise Traffic related air pollution and health"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes a limited analysis of eye symptoms in a small group of traffic police officers in Kathmandu, Nepal. Despite the limitations of the cross-sectional set-up and limited resources, this analysis merits publication because of the small volume of studies on ophthalmological effects of air pollution and the health effects of air pollution in non-western countries with extremely high air pollution exposures. The article would benefit from a brief description of concentrations of other common (gaseous or solid) air pollutants (NO2, CO, BC) during August 2017 and annual averages in the previous years in Kathmandu if available.\n\nThe total number of questions in the OSDI questionnaire could be mentioned to facilitate the interpretation of the formula used.\n\nThe description of the Schirmer I test is too brief to be easily understood by air pollution scientists who are no experts in ophthalmology. To consider only values <5 mm as abnormal seems to be a very strict definition e.g. compared to results presented by Karampatakis et al.1. Based on RE & LE combined the % of abnormal tests is less than 15% (not 17%). Also there is no explanation about the OSDI of the 9% of participants that did not undergo a Schirmer I test, this could potentially lead to biased results.\n\nIn Figure 1 the categories of Mild (23%) and Moderate (20%) seem to have been mixed up. 
The authors should double check whether this also happened with other categorizations (not available from the provided data) and repeat the statistical analysis if necessary. By providing more original (not categorized) data, reanalysis by other researchers would also be facilitated. The methods section does not mention that a paired t-test was used to compare results of the Schirmer’s I test for RE and LE.\n\nBecause of the cross-sectional set-up of the study, it would be more prudent to avoid the word ‘implies’ in the discussion and instead use ‘suggests’.\n\nThe authors provide a good assessment of some of the limitations/weaknesses of their study, including the lack of a control group, the lack of personal/detailed exposure data (or data on groups with a different level of exposure) and the lack of more sophisticated eye tests. The inherent limitations of the cross-sectional set-up should also be mentioned.\n\nThe study population of traffic police officers could perhaps better be characterized as a ‘more exposed’ population instead of ‘vulnerable’, unless there would be a reason why these adults would be more vulnerable than other population groups.\n\nWith respect to their recommendation of wearing sunglasses, the authors should provide a reference to a source providing evidence for the benefits of such an intervention. Were the traffic police officers questioned on the frequency of use of sunglasses during work?\n\nSome minor language issues could be corrected.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3968",
"date": "18 Sep 2018",
"name": "Nabin Paudel",
"role": "Author Response",
"response": "Dear Dr Panis, Thank you very much for taking the time to review our manuscript. We have done our best to address your comments. We hope that our reply is satisfactory and that we have responded to, and dealt with, all comments adequately. Please see responses below: Comment: This paper describes a limited analysis of eye symptoms in a small group of traffic police officers in Kathmandu, Nepal. Despite the limitations of the cross-sectional set-up and limited resources, this analysis merits publication because of the small volume of studies on ophthalmological effects of air pollution and the health effects of air pollution in non-western countries with extremely high air pollution exposures. Response: Thank you very much for your kind comments. We agree with the limitations of the study but hope that this study will be the first one to raise awareness among affected individuals and concerned parties regarding the effect of air pollution on ocular health. Comment: The article would benefit from a brief description of concentrations of other common (gaseous or solid) air pollutants (NO2, CO, BC) during August 2017 and annual averages in the previous years in Kathmandu if available. Response: Thank you very much. We apologise for the unavailability of the NO2, CO and BC data for the month of August 2017 as Nepal does not have a national mechanism to collect these data. We have gathered as much relevant information as we can from the published literature and have incorporated in the latest version. Comment: The total number of questions in the OSDI questionnaire could be mentioned to facilitate the interpretation of the formula used. Response: This has now been incorporated with the OSDI questionnaire attached as a supplementary file. Comment: The description of the Schirmer I test is too brief to be easily understood by air pollution scientists who are no experts in ophthalmology. 
To consider only values <5 mm as abnormal seems to be a very strict definition e.g. compared to results presented by Karampatakis et al.1 Response: Thank you for this thoughtful comment. We have included some additional information regarding the Schirmer's test. The value of <5mm for abnormal was based on its diagnostic accuracy. This has been briefly mentioned in the manuscript. Comment: Based on RE & LE combined the % of abnormal tests is less than 15% (not 17%). Also there is no explanation about the OSDI of the 9% of participants that did not undergo a Schirmer I test, this could potentially lead to biased results. Response: Thank you for pointing this out. We have rectified the percentage. We apologise for the lack of clarification regarding the 91% who underwent the Schirmer test. These data were based on an earlier data analysis. We did not include the data from those patients who had incomplete information; hence, they were not analysed in this study. Only participants who had both the Schirmer's and OSDI scores were included in the study. Comment: In Figure 1 the categories of Mild (23%) and Moderate (20%) seem to have been mixed up. The authors should double check whether this also happened with other categorizations (not available from the provided data) and repeat the statistical analysis if necessary. By providing more original (not categorized) data reanalysis of the data by other researchers would also be facilitated. Response: We have now rectified the percentage. This was an error made while plotting the graph. We confirm that this has not affected any other analysis. The dataset file consists of all the information that we collected and that was relevant to the study project. The categorisation of age group was suggested by the editorial office. Comment: The methods section does not mention that a paired t-test was used to compare results of the Schirmers I test for RE and LE. Response: Thank you very much. We have mentioned it now. 
Comment: Because of the cross-sectional set-up of the study it would be more prudent to avoid the word ‘implies’ in the discussion and instead use ‘suggests’. Response: We have changed ‘implies’ to ‘suggests’ as recommended. Comment: The authors provide a good assessment of some of the limitations/weaknesses of their study. Including the lack of a control group, lack of personal/detailed exposure data or on groups with a different level of exposure and lack of more sophisticated eye tests. The inherent limitations of the cross-sectional set-up should also be mentioned. Response: Thank you so much for your kind comments. We have added the cross-sectional nature of the study as one of the limitations and have advised readers that the results must be interpreted with caution. Comment: The study population of traffic police officers could perhaps better be characterized as a ‘more exposed’ population instead of ‘vulnerable’ unless there would be a reason why these adults would be more vulnerable than other population groups. Response: We have changed vulnerable to more exposed. Thank you. Comment: With respect to their recommendation of wearing sunglasses the authors should provide a reference to a source providing evidence for the benefits of such an intervention. Were the traffic police officers questioned on the frequency of use of sunglasses during work? Response: We have added a reference that reports the beneficial effect of glasses on ocular surface disorders such as dry eyes. Unfortunately, we did not ask any questions regarding the frequency of the use of sunglasses during work, but will definitely consider this for future studies. Comment: Some minor language issues could be corrected. Response: We have attempted to reduce such issues as much as possible."
}
]
},
{
"id": "35769",
"date": "20 Aug 2018",
"name": "Monique Matsuda",
"expertise": [
"Reviewer Expertise Air pollution and effects on the ocular surface"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe study evaluated the eye symptoms and the lacrymal production of traffic police officers in Kathmandu, Nepal, a city with high levels of air pollution.\nAs the study refers to the effects of air pollution on the ocular surface of traffic police officers, it is necessary to mention the average concentration of air pollutants during August 2017, mainly PM2.5 and NO2. Vehicles are generally the main responsible for the emission of nitrogen oxides, since in Kathmandu vehicular traffic seems to be very intense, the additional information about these air pollutants is essential as part of the study.\nIn addition to the air pollutants, I suggest mentioning the meteorological data, such as humidity and temperature, since these climatic factors may influence the clinical parameters of the ocular surface.\nThe study was conducted during the hot and rainy season. In this period there is an increase of allergies and conjunctivitis and among the symptoms, an increased frequency of itching, foreign body sensation and photophobia. In addition, the use of fans and air conditioning is greater. All these factors could influence and favor the appearance of symptoms and the increase of OSDI score. Thus, in future studies, it would be interesting to carry out the same tests and ophthalmologic examinations during the dry season in the same group of traffic police officers.\nBesides that, OSDI checks the symptoms during the last week. 
The reproducibility of symptoms on the OSDI questionnaire at different periods could be indicative of the prevalence of symptoms over a long period due to air pollution exposure, and could more certainly be correlated with the working time in traffic. I suggest that the correlation between the OSDI score and working time be mentioned with caution in the text, since this is a cross-sectional study.\nI recommend providing a table describing the frequency of OSDI symptoms as an attachment.\nThe authors mentioned the limitations of the study very well. Despite the application of only two ophthalmological parameters (OSDI questionnaire and Schirmer test), I recommend the indexing of the article, since for areas with high air pollutant levels, as in Kathmandu, it is necessary to evidence the effects of air pollution on the ocular surface and to strengthen public policies in this area.\nI suggest minor language revision.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3967",
"date": "18 Sep 2018",
"name": "Nabin Paudel",
"role": "Author Response",
"response": "Dear Dr Matsuda, Thank you for your time to review our manuscript and providing us with an opportunity to revise the manuscript. We have endeavoured to address each comment and suggestions. Please see below. Comment: \"As the study refers to the effects of air pollution on the ocular surface of traffic police officers, it is necessary to mention the average concentration of air pollutants during August 2017, mainly PM2.5 and NO2. Vehicles are generally the main responsible for the emission of nitrogen oxides, since in Kathmandu vehicular traffic seems to be very intense, the additional information about these air pollutants is essential as part of the study.\" Response: Thank you very much for your comment. We agree with the comments. We had already provided the August 2017 data on PM2.5 and PM10 concentration in the first version. As there is no national mechanism to measure the NO2 system in Nepal the data from August 2017 is not available. However, in our latest version, we have included the NO2 data of 2015. Please refer to the introduction. Comment: In addition to the air pollutants, I suggest mentioning the meteorological data, such as humidity and temperature, since these climatic factors may influence the clinical parameters of the ocular surface. Response: Thank you. The meteorological data has now been added. Comment: The study was conducted during the hot and rainy season. In this period there is an increase of allergies and conjunctivitis and among the symptoms, an increased frequency of itching, foreign body sensation and photophobia. In addition, the use of fans and air conditioning is greater. All these factors could influence and favor the appearance of symptoms and the increase of OSDI score. Thus, in future studies, it would be interesting to carry out the same tests and ophthalmologic examinations during the dry season in the same group of traffic police officers. Response: We absolutely agree with the comment. 
We have included these factors in the discussion. We also agree that it would be interesting to carry out the same tests and ophthalmologic examination during the dry season in the same group to gain a better understanding of the causative factors of ocular symptoms in this population. We are planning such a study at the moment. Comment: Besides that, OSDI checks the symptoms during the last week. The reproducibility of symptoms of the OSDI questionnaire at different periods could be indicative of the prevalence of symptoms for a long period due to air pollution exposure and could be more certainty correlate with the working time in the traffic. I suggest that the correlation between the OSDI score and working time should be mentioned with caution in the text, once this is a cross-sectional study. Response: Thank you very much for this close observation. We have mentioned the cross-sectional nature of the study as one of the limitations and hence have advised that caution must be applied before interpreting the research findings. Comment: I recommend the availability of an OSDI table describing the frequency of symptoms in the attachment. Response: We have attached the OSDI form as an attachment. Comment: The authors mentioned very well the limitations of the study. Despite the application of only two ophthalmological parameters (OSDI questionnaire and Schirmer test), I recommend the indexing of the article since studies about the effects of air pollution in areas of high air pollutant levels, as in Kathmandu, it is necessary to evidence its effects on the ocular surface and to strengthen public policies in this area. Response: We are thankful for your kind comments."
}
]
}
] | 1
|
https://f1000research.com/articles/6-2167
|
https://f1000research.com/articles/7-684/v1
|
31 May 18
|
{
"type": "Research Note",
"title": "Preliminary study on the inhibitory effect of seaweed Gracilaria verrucosa extract on biofilm formation of Candida albicans cultured from the saliva of a smoker",
"authors": [
"Zaki Mubarak",
"Adintya Humaira",
"Basri A. Gani",
"Zainal A. Muchlisin",
"Adintya Humaira",
"Basri A. Gani",
"Zainal A. Muchlisin"
],
"abstract": "Background: Candida albicans is an opportunistic fungus that infects the oral cavity. Increases in colony numbers of C. albicans can be caused by multiple factors, such as smoking, a weakened immune system, taking antibiotics and with immune-compromised individuals. Smoking can increase the virulence factor of C. albicans and make it stronger. One of the virulence factors of C. albicans is the biofilm it forms. The C. albicans biofilm makes it more tolerant to extracts of the seaweed Gracilaria verrucosa, which has antifungal activity. The objective of the study was to examine the ability of the G. verrucosa extracts to inhibit the formation of biofilm by C. albicans obtained from the saliva of smoker. Methods: A total of six concentrations of G. verrucosa (6.25, 12.5, 25, 50, 75 and 100%) were tested in this study. The positive control was fluconazole 0.31 µg/ml C. albicans was taken from the saliva of one smoker in Faculty of Dentistry, Syiah Kuala University. The total amount of biofilm was assessed using an ELISA reader. The data were subjected to Kruskal-Wallis test at a significance limit of p<0.05. Results: The seaweed extract has three bio-active compounds: steroids, terpenoid, and tannins. The results showed that the inhibitory activity of seaweed on C. albicans biofilm formation increases as its concentration increases. The highest effectiveness was recorded at a seaweed concentration of 100% at 48 h of exposure. Conclusions: The optimal inhibition of the C. albicans biofilm formation was recorded at the concentration of 100% G. verrucosa after 48 hours of exposure.",
"keywords": [
"Candida albicans",
"oral candidiasis",
"seaweed Gracilaria verrucosa"
],
"content": "Introduction\n\nSmoking is a common problem in most developing countries, including Indonesia. Based on a survey by The Tobacco Atlas in 2015, Indonesia has the highest number of smokers in Asia, with 66% of men in Indonesia being active smokers1. Smoking can lead to addiction owing to the nicotine contents, and harm due to the presence of toxic compounds such as CO, ammonia and tar contents in tobacco1. Besides causing addiction, substances in cigarettes can also cause various diseases, such as oral candidiasis. Oral candidiasis is caused by the infection of the fungus Candida albicans2. This fungus is part of the normal flora of the human mouth, but it can become pathogenic in certain conditions, for example, due to nicotine exposure3.\n\nInfection with C. albicans will increase the formation of a biofilm of the fungus3. The biofilm is an extracellular matrix consisting of C. albicans colonies4. The size of the biofilm increases when exposed to substances in cigarette smoke, as the cigarette has content that can initiate growth and nourish C. albicans5,6.\n\nCurrently, fluconazole and nystatin are the most effective drugs for treating oral candidiasis. Unfortunately, these drugs have side effects; for example the prolonged use of fluconazole leads to resistance7, while high dosages of nystatin give gastrointestinal discomfort and increase plaque formation8. Therefore, plant-derived antifungals may be a viable oral treatment option for candidiasis. One of these potential plants is seaweed Gracilaria verrucosa. This seaweed contains several bioactive compounds, including alkaloids, flavonoids, phenolics, saponins, steroids and terpenoids9. Aceh Province, Indonesia, has large G. verrucosa resources, although so far this aquatic plant has not been commonly used for medicinal purposes. Hence, the objective of the present study was to examine the ability of seaweed extract to inhibit the growth of C. 
albicans obtained from the saliva of a smoker, as indicated by biofilm formation.\n\n\nMethods\n\nThe study was conducted in August 2017 at the Laboratory of Microbiology, Veterinary Faculty, Syiah Kuala University. C. albicans was isolated from the saliva of one volunteer, an active smoker at the Faculty of Dentistry, Syiah Kuala University. The volunteer was approached directly, agreed to take part, and gave written informed consent. The inclusion criterion for the volunteer was being an active smoker who smokes at least 20 cigarettes per day. The saliva was collected once the volunteer finished smoking. The G. verrucosa seaweed was collected from a farmer in Pulo Aceh, Aceh Province. Ethical clearance (No. 1741/UN11.1.21/TU/2017) was obtained from the Faculty of Dentistry, Syiah Kuala University, Banda Aceh, Indonesia.\n\nExtraction was performed based on the maceration method10. A total of 3 kg seaweed was washed with tap water then rewashed using distilled water. The seaweed sample was dried at room temperature (25°C) for 24 h, avoiding direct sunlight. The wet seaweed was chopped into small pieces (2 mm), then soaked in 96% ethanol as a solvent. After 24 h the sample was filtered using Whatman filter paper No. 42 and the resulting residue was soaked again in 96% ethanol. This procedure was repeated until the solvent added to the sample no longer changed color, i.e. remained limpid. All the filtrate collected across these steps was then evaporated using a vacuum rotary evaporator (Laborta 4003 control, Heidolph) for 15 min at 60°C. The extract was collected and stored in a refrigerator at 4°C.\n\nSaliva was collected by spitting into a glass jar (15 ml), then 1 ml PBS (0.01 M, pH 7.2) was added to the jar. 
The jar was centrifuged at 10,000 rpm for 10 min, after which the precipitate was taken and incubated in CHROMagar Candida medium for 2 days to allow for colony development. If the colour of a colony was green, this indicated that the colony was C. albicans.\n\nFollowing culturing of C. albicans in CHROMagar Candida medium, one colony of cultured C. albicans was mixed with 5 ml peptone in a tube then incubated at 37°C for 24 h. After 24 h, the turbidity of media was compared to a 0.5 McFarland solution standard, equivalent to 1.5 x 108 CFU/ml.\n\nFlavonoid test. A total of 5 ml seaweed extract were mixed with 0.5 cm Mg band and two drops of HCl then heated by passing over a Bunsen flame. The coloration to red or purple after heating indicated the presence of flavonoids11.\n\nAlkaloid tests. A total of 5 ml seaweed extract and 8 ml HCl were mixed to homogeneity then filtered. The filtrate was then subjected to Mayer, Wagner and Dragendroff tests for alkaloids to ensure detection of any alkaloids, based on those described by Vimalkumar et al.11. For the Mayer test, approximately 2 ml filtrate was mixed with 5 g potassium mercuric iodide. The formation of white or pale precipitates indicates the presence of alkaloids. For the Wagner test, a total of 2 ml filtrate was mixed with 2 ml Wagner reagent. The formation of brown or reddish-brown precipitates indicates the presence of alkaloids. For the Dragendroff test, 2 ml of filtrate was mixed homogenously with bismuth potassium iodide solution, the red precipitates indicate the presence of alkaloid.\n\nTannin/phenolic test. Two drops of 1% FeCl3 was added to 1 ml seaweed extract. The change in the color to a blackish green indicates the presence of tannin/phenolic content12.\n\nSaponin test. A total of 1 ml seaweed extract was mixed with distilled water to 20 ml then shaken vertically for 15 s. Persistent foaming is indicative of saponin content.\n\nSteroid test. 
Approximately 2 ml seaweed extract was diluted in 2 ml CHCl3 with a few drops of H2SO4 and 1 ml of CH3COOH. The formation of green or blue precipitates indicates the presence of steroids11.\n\nTerpenoid test. A total of 5 ml seaweed extract was mixed in 2 ml of chloroform followed by the careful addition of 3 ml concentrated H2SO4. A layer of reddish brown coloration formed at the interface indicates a positive result for the presence of terpenoids13.\n\nA total of 100 µl casein-peptone lecithin polysorbate broth (Merck-1117230500) was prepared in each well of a 96-well plate for 5 min, then the peptone was removed from the wells. A total of 50 µl cultured C. albicans, which was diluted to a 0.5 McFarland standard turbidity, was added into the 96-well plates and left in the wells for 5 min. Next, the seaweed extracts were added at decreasing concentrations (100, 75, 50, 25, 12.5 and 6.25%), with fluconazole 0.31 µg/ml as a control. The plates were incubated for 24, 48 or 72 h at 37°C, then approximately 200 µl of 0.1% crystal violet was added into the plates and incubated for 15 min at room temperature.\n\nAfter 15 min, each well was washed three times with 200 µl PBS. The crystal violet in each well was then dissolved in 100 µl 96% ethanol for 2 min. Biofilm formation was analyzed using an ELISA reader at a wavelength of 620 nm14,15.\n\nThe data were subjected to the Kruskal-Wallis test using SPSS software v20.0.\n\n\nResults\n\nThe phytochemical tests showed that the G. verrucosa seaweed extract reacted positively for steroids, terpenoids, and tannins, indicating that these substances are present in the seaweed (Table 1).\n\nIn general, the inhibitory effect increased as the seaweed concentration increased. Kruskal-Wallis analysis (P<0.05) showed that the seaweed extract significantly inhibited the formation of C. albicans biofilm. 
However, a higher optical density was recorded for fluconazole (control), followed by the 100% seaweed extract, at all exposure times; there were no significant differences between these treatments. The results showed that the best inhibition effect was recorded with fluconazole followed by 100% seaweed extract 48 h after exposure (Figure 1).\n\n\nDiscussion\n\nThe study showed that 100% seaweed extract is promising for inhibiting the growth of C. albicans, indicating that it has the potential to be used as an antifungal agent against C. albicans to treat oral candidiasis in smokers. C. albicans is a normal micro-organism in the human mouth; however, this fungus can be pathogenic in certain circumstances3, such as in the mouth of smokers2. Smoking can increase the protein levels of HWP1, EAP1 and SAP2 in C. albicans. Higher levels of these proteins increase the virulence of C. albicans. This can then increase biofilm formation and cause oral candidiasis6. In addition, smoking can also cause a decrease in immune function, making individuals more susceptible to oral candidiasis4,6.\n\nThe results showed that treatment with 100% seaweed extract can inhibit the formation of C. albicans biofilm to an almost equivalent degree as fluconazole (control). This activity is presumably caused by the bioactive compounds in the seaweed extract, such as the steroids, terpenoids, and tannins that were detected in this study. According to Sampaio et al.17, the antifungal activity of a substance strongly depends on the composition of its bioactive compounds; these bioactive compounds have the potential to destroy the biofilm and affect the viability of C. albicans. For example, steroids can kill C. albicans through their lipophilic properties, interfering with the formation of fungal spores and mycelium18. This activity weakens C. albicans, inhibiting the formation of the biofilm. 
To function optimally, the activity of the steroids requires oligosaccharides that are also present in the seaweed19.\n\nTerpenoids are derivatives of saponins. Terpenoids act as antifungals by damaging the organelles of the fungi and inhibiting the secretion of enzymes, leading to inhibition of the growth of C. albicans fungal cells. Terpenoids can also damage the morphology of C. albicans20. Tannins may inhibit chitin synthesis in C. albicans cell walls; as a result, the C. albicans cell membrane is left unprotected, which can inhibit cellular metabolism. In addition, tannins can inhibit ergosterol activity in Candida albicans21.\n\nThe effectiveness of the extracts in inhibiting fungi is influenced by at least three factors, namely the concentration, exposure time, and contact surface media22. The present study showed that the inhibitory effect of the seaweed extract increased as its concentration increased, with the best effect recorded at 48 h of exposure. This is probably because farnesol works effectively after 48-72 h of exposure. Farnesol is a quorum-sensing molecule that has the potency to inhibit C. albicans growth23.\n\nFurther studies should be conducted to extract the individual bioactive compounds in the seaweed and then test their action on C. albicans at different dosages. The purpose of these further studies will be to assess which bioactive compounds, and at which dosages, play a vital role in inhibiting the growth of C. albicans.\n\n\nConclusion\n\nGracilaria verrucosa seaweed extract inhibited the growth of the biofilm of C. albicans isolated from the saliva of a smoker, with the inhibitory effect increasing with concentration, up to an optimal concentration of 100% at 48 h of exposure.\n\n\nData availability\n\nDataset 1. The raw data of the triplicate anti-biofilm assays of seaweed against C. albicans for 24, 48 and 72 h at a wavelength of 620 nm. DOI: 10.5256/f1000research.14879.d20427016.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors declare that no funding was involved in supporting this work.\n\n\nReferences\n\nLian TY, Dorotheo U: The Tobacco Control Atlas. Seutawan Co. 2016; 3. Reference Source\n\nMayer FL, Wilson D, Hube B: Candida albicans pathogenicity mechanisms. Virulence. 2013; 4(2): 119–128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdel E, Khadijed PS, Mohammad M: Oral Cavity Candidiasis as a Complication of Fungal Diseases in Diabetic Patients in South-East of Iran. IJIAS. 2016; 14(4): 1134–1138. Reference Source\n\nLittle JW, Falace D, Miller C: Dental Management of The Medically Compromised Patient. Mosby: Elsevier; 2013. Reference Source\n\nKeten HS, Keten D, Ucer H, et al.: Prevalence of oral candida carriage and candida species among cigarette and maras powder users. Int J Clin Exp Med. 2015; 8(6): 9847–54. PubMed Abstract | Free Full Text\n\nSemlali A, Killer K, Alanazi H, et al.: Cigarette Smoke Condensate Increases C. albicans Adhesion, Growth, Biofilm Formation, and EAP1, HWP1 and SAP2 Gene Expression. BMC Microbiol. 2014; 14: 61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeron IH, Lima FR, Lopes AF, et al.: Resistance Surveillance in Candida albicans: A Five-Year Antifungal Susceptibility Evaluation in a Brazilian University Hospital. PLoS One. 2014; 11(7): e0158126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLyu X, Zhao C, Yan MZ, et al.: Efficacy of Nystatin for The Treatment of Oral Candidiasis: a Systematic Review and Meta-Analysis. Drug Des Devel Ther. 2016; 10: 1161–1171. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSahat JH: Warta Ekspor. September ed. Jakarta: DGNED; 2013; 3.\n\nPrima MI: Uji Aktivitas Antibakteri Ekstrak Metanol Ganggang Merah Gracilaria verrucosa terhadap Beberapa Bakteri Patogen Gram Positif dan Gram Negatif. Published Only In Database. 2012. 
Reference Source\n\nVimalkumar CS, Hosagaudar VB, Suja SR, et al.: Comparative Preliminary Phytochemical Analysis Of Ethanolic Extracts of Leaves of Olea dioica Roxb., Infected with The Rust Fungus Zaghouania oleae (E.J.Butler) Cummins and Non-infected Plants. J Pharmacogn Phytochem. 2014; 3(4): 69–72. Reference Source\n\nHariyanto Ih, Inarah F, Suci PR, et al.: Skrining Fitokimia dan Analisis Kromatografi Lapis Tipis dari Ekstrak Ethanol Herba Pacar Air (Impatiens balsamina Linn.). Published Only In Database. 2014. Reference Source\n\nKhan MA, Qureshi AR, Ullah F, et al.: Phytochemical analysis of selected medicinal plants of Margalla Hills and surroundings. JMPR. 2011; 5(25): 6017–7. Reference Source\n\nMetzler A: Developing a Crystal Violet Assay to Quantify Biofilm Production Capabilities of Staphylococcus aureus. Published Only In Database. 2016. Reference Source\n\nRoberts SK, Wei GX, Wu CD: Evaluating biofilm growth of two oral pathogens. Lett Appl Microbiol. 2002; 35(6): 552–6. PubMed Abstract | Publisher Full Text\n\nMubarak Z, Humaira A, Gani BA, et al.: Dataset 1 in: Preliminary study on the inhibitory effect of seaweed Gracilaria verrucosa extract on biofilm formation of Candida albicans cultured from the saliva of a smoker. F1000Research. 2018. Data Source\n\nSampaio BL, Edrada-Ebel R, Da Costal FB: Effect of the environment on the secondary metabolic profile of Tithonia diversifolia: a model for environmental metabolomics of plants. Sci Rep. 2016; 6: 29265. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSubsiha S, Subramoniam A: Antifungal Activities of Steroid From Pallavicinia lyellii, a liverwort. Indian J Pharmacol. 2005; 37(5): 304–308. Publisher Full Text\n\nCammarata A, Upadhyay SK, Jursic BS, et al.: Antifungal activity of 2α,3β-functionalized steroids stereoselectively increases with the addition of oligosaccharides. Bioorg Med Chem Lett. 2011; 21(24): 7379–7386. 
PubMed Abstract | Publisher Full Text\n\nMartínez A, Rojas N, García L, et al.: In vitro activity of terpenes against Candida albicans and ultrastructural alterations. Oral Surg Oral Med Oral Pathol Oral Radiol. 2014; 118(5): 553–9. PubMed Abstract | Publisher Full Text\n\nHastuti SU, Ummah PIY, Khasanah NH: Antifungal Activity of Piper aduncum and Peperomia pellucida Leaf Ethanol Extract Against Candida albicans. AIP Conf Proc. 2017; 1844: 020006. Publisher Full Text\n\nKenakin PT: A Pharmacology Primer: Theory, Application, and Methods. 2nd edition. USA: Elsevier. 2006; 36–38. Publisher Full Text\n\nAlem AM, Oteef MD, Flowers TH, et al.: Production of tyrosol by Candida albicans biofilms and its role in quorum sensing and biofilm development. Eukaryot Cell. 2006; 5(10): 1770–1779. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "34557",
"date": "12 Jun 2018",
"name": "Heni Susilowati",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis preliminary research is interesting enough to be developed, but there are some things that need to be reconsidered:\n\n1. ABSTRACT\nThe research background written in the abstract (lines 6 and 7) does not match the purpose of the study. The sentence can be read as contradicting the antifungal potential possessed by Gracilaria verrucosa. It is dubious to investigate the potency of the biofilm inhibition effect if Candida albicans is more tolerant to the Gracilaria extract.\n\n2. METHODS\nMethods need to explain the following: the systemic conditions and the state of the teeth and oral soft tissue of the volunteers; the taxonomic determination of the Gracilaria verrucosa plant should also be mentioned. Were the cultures washed after the incubation period of treatment? How many times was each experiment repeated to produce a representative result?\n\n3. RESULTS\nInterpretation of the results is confusing; as far as I know, the higher the optical density value, the more biofilm is formed. The results in Fig. 1 show that the 100% extract and fluconazole have higher optical densities than the lower concentrations of extract; as far as I know, this shows that the mass of biofilm formed in these groups is higher. Please observe the methods and results of the research reported by Sebaa et al. 2016. The statistical method used was only Kruskal-Wallis; is there a multiple comparison analysis? 
The researchers need to discuss the antibiofilm effect of the extract at lower concentrations, because 100% extract is, of course, not a good recommendation for subsequent experimental use.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "3878",
"date": "16 Aug 2018",
"name": "Zaki Mubarak",
"role": "Author Response",
"response": "Abstract\nComments from reviewer: The objective has been clarified.\nAction done: We think the aims of the study still needed to be stated. Rearrangement and sentence improvements were made in some sections of the abstract.\n\nIntroduction\nComments from reviewer: No comments.\nAction done: Grammar checking, revision and improvement were done to provide more concise and clear sentences/statements.\n\nMaterial and method\nComments from reviewer: Clarification needed on the sentences: “This volunteer has a bad OHIS4” (time and place); “Then microtiter wells plate wells were washed three times with 200µl of PBS buffer and dried for 15 minutes.” (biofilm examination); “This examination was repeted three times with triplo method” (biofilm examination).\nAction done: Sentence improvement, grammar checking and correction were done throughout this section to provide better readability, clarity and possible reproducibility.\n\nResults\nComments from reviewer: No comments.\nAction done: This part was rewritten due to misinterpretations in the previous manuscript about which treatment gave the best inhibition effect on C. albicans.\n\nDiscussion\nComments from reviewer: No specific comments.\nAction done: This part was rewritten due to the changes in the presentation of the results. Sentence improvement, grammar checking and reorganization were done throughout this section to provide better readability and clarity.\n\nConclusion\nComments from reviewer: No specific comments.\nAction done: This part was rewritten in relation to the revisions performed in the results and discussion."
}
]
},
{
"id": "34561",
"date": "19 Jun 2018",
"name": "Shahida Mohd-Said",
"expertise": [
"Reviewer Expertise Periodontology",
"natural product drug discovery"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nOverall, a fair manuscript with sound findings in a relevant area of study. It could be much improved with a grammar check and essential scientific-writing reorganisation, especially in the Introduction and Discussion sections. Inclusion of results for a negative control (untreated biofilm) would critically improve the presentation of the Results and the appreciation of the findings. The study lacks relevant information on differences in findings between smokers and non-smokers, which means the conclusion on the antifungal effect of the agent in smokers may not be strongly supported.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3877",
"date": "16 Aug 2018",
"name": "Zaki Mubarak",
"role": "Author Response",
"response": "Some improvements in grammar, sentences and writing reorganization have been made in response to reviewer Dr Shahida M. Said. Some details of the methods and analysis have also been provided to improve the readability and potential reproducibility of the work. As a result, the work in this study is much better presented. These improvements can be found in the tracked changes. As this study is still preliminary, a comparison between smokers and non-smokers has not been performed yet, but the findings clearly show the potential of using the seaweed extract to treat oral candidiasis."
}
]
},
{
"id": "34560",
"date": "27 Jun 2018",
"name": "Cristiane Y. Koga-Ito",
"expertise": [
"Reviewer Expertise Microbiology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe manuscript by Mubarak et al. aimed to evaluate the effect of G. verrucosa extract on Candida albicans biofilm formation. The rationale of the study should be clearer. Also, there is a lack of essential information throughout the text. The English should be revised.\n\nSpecific comments\nIntroduction: The sentence “Smoking is a common problem in most developing countries, including Indonesia.”: a reference should be added. Also, smoking is not a problem only or mainly in developing countries; please consider revising. The authors stated that “This fungus is part of the normal flora of the human mouth, but it can become pathogenic in certain conditions, for example, due to nicotine exposure.”. Exposure to nicotine has been correlated with an increase in C. albicans virulence factor expression. The predisposing conditions for candidiasis are much more related to the immunologic state of the host and imbalance in the microflora; please consider revising. Revise the sentence “Infection with C. albicans will increase the formation of a biofilm of the fungus”; it is confusing. Revise the sentence “The biofilm is an extracellular matrix consisting of C. albicans colonies”; it is confusing. The authors stated that “high dosages of nystatin give gastrointestinal discomfort and increase plaque formation”. What do the authors mean by “increase plaque formation”? The rationale of the study is not clear. Why did the authors select G. verrucosa? Is this plant commonly used? 
Why did the authors decide to use saliva from a smoker?\n\nMethods: Revise the phrase “C. albicans was extracted from the saliva”; C. albicans was isolated from saliva. The authors reported that “G. verrucosa seaweed was collected from a farmer in Pulo Aceh, Aceh Province.”. More information on the plant, the exact location it was collected from, the period of the year, the identification procedure (how and by whom was the identification done?), registration in a herbarium, and the voucher number should be included in the text. Please revise the sentence “Saliva was collected by spitting into a glass jar (15 ml), then 1 ml PBS (0.01 M, pH 7.2) was added to the jar.” Why was 1 ml of PBS added to the saliva? What was the final volume of saliva collected? Was saliva stimulated? The authors stated that “If the colour of a colony was green, this indicated that the colony was C. albicans.”. However, the color of the colony in CHROMagar is only a presumptive test, and phenotypic or genotypic definitive identification should be done. The inclusion of a reference strain is highly needed.\n\nInclude the number of experiments/replicates performed.\n\nThe inclusion of more clinical isolates, from non-smoker patients, is needed.\n\nThe methodology of the activity of the extract on biofilm formation is not clear. Why and how was peptone removed from the wells? Why was the fungal suspension left in the wells for 5 min? How was the 100% extract concentration obtained in the well (the broth was already inside the well)? Why was the concentration of 0.31 µg/ml chosen for fluconazole?\n\nFigure 1 should be revised. Note that the optical density at 620 nm is higher after treatment with the 100% extract when compared to the other concentrations.\n\nThe Discussion and Conclusion sections should be revised after the revision of the aforementioned points.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? 
No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
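The extract concentrations tested in the record above (6.25–100% v/v) form a dilution series from a 100% stock. A small helper sketch using C1·V1 = C2·V2; the 200 µl final volume is an assumed example value, since the record does not state the mixing volumes:

```python
# Volumes of 100% extract stock and diluent needed to reach each target
# concentration, via C1*V1 = C2*V2. The 200 ul final volume is a
# hypothetical example value, not taken from the study.
def dilution_series(targets_percent, final_volume_ul=200.0, stock_percent=100.0):
    plan = []
    for target in targets_percent:
        stock_ul = final_volume_ul * target / stock_percent
        plan.append({
            "target_%": target,
            "stock_ul": round(stock_ul, 2),
            "diluent_ul": round(final_volume_ul - stock_ul, 2),
        })
    return plan

# The six concentrations named in the record above
plan = dilution_series([100, 75, 50, 25, 12.5, 6.25])
```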
|
https://f1000research.com/articles/7-684
|
https://f1000research.com/articles/7-1476/v1
|
17 Sep 18
|
{
"type": "Research Article",
"title": "Biomimetic remineralization of acid etched enamel using agarose hydrogel model",
"authors": [
"Sara El Moshy",
"Marwa M.S. Abbass",
"Amal M. El-Motayam"
],
"abstract": "Background: Minimally invasive dentistry aims to prevent the progression of caries and treats non-cavitated lesions through non-invasive approaches to preserve the integrity of tooth structure. The aim of this research was to investigate the possible biomimetic effect of agarose hydrogel in remineralizing a human demineralized enamel model. Methods: Mandibular third molars were distributed into three groups (G1, G2 and G3) according to the follow-up time (2, 4 and 6 days, respectively). A caries-like lesion was prepared by applying 37% phosphoric acid gel for 1 minute, and remineralization was then performed by applying agarose hydrogel on the demineralized surfaces. The specimens were placed in phosphate solution at 37˚C for 2, 4 and 6 days. Scanning electron microscopy (SEM), surface microhardness (SMH) and surface roughness (SR) analyses were performed to assess the regenerated tissue. Results: SEM revealed mineral depositions on the demineralized enamel surface that increased in density over time, resulting in a relatively smooth surface in G3. SR and SMH analyses revealed significant differences between the remineralized enamel surfaces of the different groups (p < 0.00001), with the highest SR in G1 and the highest SMH in G3. Conclusions: Agarose hydrogel application is a promising approach to treat early carious lesions. Further studies are needed to clarify the stability of agarose hydrogels in clinical applications.",
"keywords": [
"Remineralization",
"agarose",
"enamel",
"microhardness",
"surface roughness"
],
"content": "Introduction\n\nBiomimetic remineralization is a non-invasive therapeutic approach that has received great attention in recent decades. It aims to restore the dental tissues to their normal biological function and esthetics1. Although several studies have proposed different methods to remineralize enamel lesions, their clinical applications are limited because they require difficult application conditions2–5. Agarose is a natural biocompatible polysaccharide that has been proposed as a matrix for crystal formation6–9. Therefore, the purpose of this study was to investigate the possible biomimetic effect of agarose hydrogel in remineralizing a human demineralized enamel model.\n\n\nMethods\n\nThe experiment was done according to the recommendations and approval of the Ethics Committee of the Faculty of Dentistry, Cairo University, for working on extracted human teeth (Approval no. 18766). Mandibular third molars were collected after being surgically extracted due to impaction, with the patients' written consent. The roots of 47 teeth were removed using a diamond disk (Komet, Rock Hill, USA, K6974) at low speed under water cooling. The crowns were divided mesio-distally and each half was embedded in self-cured acrylic resin (Acrostone Co. Cairo, Egypt, 01CCP50), leaving the enamel surface exposed. Specimens were examined under a stereomicroscope (Leica S8 APO, Leica Microsystems, Switzerland) and specimens with defects (erosions, cracks, visible stains, hypo-calcification) were excluded. Specimens were distributed into three groups (n = 31/group) according to follow-up time (Table 1). Specimens were demineralized using 37% phosphoric acid gel (Super Etch, SDI Limited, Australia, 8100040) for 1 min and rinsed with de-ionized water for 60 seconds.\n\nG1 (2 days), G2 (4 days), G3 (6 days).\n\nAgarose (Vivantis, USA, PC0701) hydrogel and phosphate solution were prepared as previously described by Cao et al.7. 
Agarose hydrogel was applied on the specimen using an acrylic template of 2 mm thickness to adjust the thickness of the applied hydrogel. After gelation of the applied hydrogels, each specimen was placed into a container filled with 20 mL of phosphate solution and placed in an incubator at 37°C. The phosphate solution and the hydrogel were changed every 24 and 48 h, respectively.\n\nThirteen specimens from each group were mounted on the SEM plate with electro-conductive glue (Electron Microscopy Sciences, PA, USA, 12660) to examine their surfaces. The SEM model used was a Quanta FEG 250 (Field Emission Gun) with an accelerating voltage of 30 kV.\n\nSMH of 9 specimens from each group was measured using a microhardness tester with a Vickers diamond indenter in different areas of the specimens (Vickers diamond, 100 g, 5 s, HMV 2; Shimadzu Corporation, Tokyo, Japan). SMH was measured at baseline, after demineralization and after remineralization.\n\nSR of 9 specimens from each group was measured using a digital microscope equipped with a built-in camera (Digital Microscope U500X, Guangdong, China). The microscope was connected to an IBM-compatible computer. WSxM software (Version 5 develop 4.1, Nanotec Electronica, SL) was used to analyze the photos and to create a 3D image of the specimen surface. The average SR was estimated using WSxM software and expressed in µm. SR was measured at baseline, after demineralization and after remineralization.\n\nThe mean SMH values and the mean SR values were statistically analyzed. One-way ANOVA followed by Tukey's post hoc test was performed to compare remineralizing potential at the different time intervals (2, 4 and 6 days). Furthermore, the same tests were used to compare enamel surfaces within the same group. The significance level was set at 0.05. 
Statistical analysis was performed with SPSS 18.0 for Windows (Statistical Package for Scientific Studies, SPSS, Inc., Chicago, IL, USA).\n\n\nResults\n\nSound enamel showed a smooth surface with some pits and scratches (Figure 1A, Figure 2A & Figure 3A). After acid etching, different etching patterns were seen, most commonly type I and type II with scattered areas of type III (Figure 1B, Figure 2B & Figure 3B). After remineralization, G1 revealed partial occlusion of some rod cores with clearly thickened interprismatic substance (Figure 1C), while in G2 prismatic enamel configurations became hidden by mineral depositions (Figure 2C). G3 revealed a relatively smooth surface with less clearly seen rod ends. Some rods’ peripheries showed complete remineralization while others were still empty (Figure 3C).\n\nScanning electron microscope (SEM) images for G1; sound enamel at baseline (A), demineralized enamel surface (B), remineralized enamel surface (C).\n\nScanning electron microscope (SEM) images for G2; sound enamel at baseline (A), demineralized enamel surface (B), remineralized enamel surface (C).\n\nScanning electron microscope (SEM) images for G3; sound enamel at baseline (A), demineralized enamel surface (B), remineralized enamel surface (C).\n\nThe mean SMH values of enamel at different intervals (2, 4 and 6 days) are presented in Table 2. In G1, significant differences were revealed between baseline, demineralized and remineralized enamel (p<0.05), with the highest SMH at baseline. In G2 and G3, there was a significant difference between the baseline and the demineralized enamel (p<0.05); however, there was no significant difference between baseline and remineralized enamel. 
Furthermore, there were significant differences among the remineralized enamel surfaces of the different groups (p<0.05), with the highest SMH in G3.\n\nBaseline (B), after demineralization (D), after remineralization (R).\n\nDifferent upper- and lower-case superscript letters indicate a significant difference between tested groups at P<0.05. Lower-case superscript letters are used for comparisons within the same row and upper-case letters are used for comparisons within each column.\n\nThe mean SR values of enamel at different intervals (2, 4 and 6 days) are presented in Table 3. In G1, there were significant differences between baseline, demineralized and remineralized enamel (p<0.05), with the highest SR in the demineralized enamel. In G2 and G3, there was a significant difference between the baseline and the demineralized enamel (p<0.05); however, there was no significant difference between baseline and the remineralized enamel. Furthermore, there were significant differences among the remineralized enamel surfaces of the different groups (p<0.05), with the highest SR in G1. The differences in SR at baseline, in demineralized enamel and after remineralization in the different groups were evident when inspecting the 3D images in Figure 4.\n\nBaseline (B), after demineralization (D), after remineralization (R).\n\nDifferent upper- and lower-case superscript letters indicate a significant difference between tested groups at P<0.05. Lower-case superscript letters are used for comparisons within the same row and upper-case letters are used for comparisons within each column.\n\nRepresentative surface roughness (SR) images of enamel specimens; baseline A, demineralized enamel B, remineralized enamel surfaces C, D, E (G1, G2, G3 respectively).\n\n\nDiscussion\n\nBiomimetic synthesis of enamel-like apatite structures under physiological conditions is an alternative restorative pathway10. 
The acid etching technique was used to mimic early enamel lesions because of the simplicity and reproducibility of this technique11. The SEM results of the present study are in agreement with previous studies6–9. The agarose hydrogel acted as an enamel organic matrix to control the size and form of the formed hydroxyapatite crystals through the interaction between the hydroxyl groups of agarose and calcium. In addition, it acted as a mineral reservoir for continuing remineralization7. The SR analysis results confirmed the SEM results, as the SR values gradually decreased across the different groups, revealing a smoother enamel surface. The SMH results are in accordance with previous studies7,9. In the current work, the SMH being lower than that of sound enamel could be attributed to incomplete compaction of the formed crystals on the enamel surface12.\n\n\nConclusions\n\nThe agarose hydrogel model has remineralizing potential for treating early carious lesions. Further studies are required to clarify the stability of agarose hydrogels in clinical application.\n\n\nData availability\n\nDataset 1: Raw surface microhardness (SMH) and surface roughness (SR) 10.5256/f1000research.16050.d21739813\n\nDataset 2: Raw scanning electron microscope (SEM) images 10.5256/f1000research.16050.d21739914",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nFeatherstone JD: Remineralization, the natural caries repair process--the need for new approaches. Adv Dent Res. 2009; 21(1): 4–7. PubMed Abstract | Publisher Full Text\n\nYamagishi K, Onuma K, Suzuki T, et al.: Materials chemistry: a synthetic enamel for rapid tooth repair. Nature. 2005; 433(7028): 819. PubMed Abstract | Publisher Full Text\n\nFowler CE, Li M, Mann S, et al.: Influence of surfactant assembly on the formation of calcium phosphate materials—A model for dental enamel formation. J Mater Chem. 2005; 15(32): 3317–25. Publisher Full Text\n\nChen H, Tang Z, Liu J, et al.: Acellular synthesis of a human enamel‐like microstructure. Adv Mater. 2006; 18(14): 1846–51. Publisher Full Text\n\nYe W, Wang XX: Ribbon-like and rod-like hydroxyapatite crystals deposited on titanium surface with electrochemical method. Mater Lett. 2007; 61(19–20): 4062–5. Publisher Full Text\n\nNing TY, Xu XH, Zhu LF, et al.: Biomimetic mineralization of dentin induced by agarose gel loaded with calcium phosphate. J Biomed Mater Res B Appl Biomater. 2012; 100(1): 138–44. PubMed Abstract | Publisher Full Text\n\nCao Y, Mei ML, Li QL, et al.: Agarose hydrogel biomimetic mineralization model for the regeneration of enamel prismlike tissue. ACS Appl Mater Interfaces. 2014; 6(1): 410–20. PubMed Abstract | Publisher Full Text\n\nCao CY, Li QL: Scanning electron microscopic analysis of using agarose hydrogel microenvironment to create enamel prism-like tissue on dentine surface. J Dent. 2016; 55: 54–60. PubMed Abstract | Publisher Full Text\n\nHan M, Li QL, Cao Y, et al.: In vivo remineralization of dentin using an agarose hydrogel biomimetic mineralization system. Sci Rep. 2017; 7: 41955. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBusch S: Regeneration of human tooth enamel. Angew Chem Int Ed Engl. 2004; 43(11): 1428–31. 
PubMed Abstract | Publisher Full Text\n\nSkucha-Nowak M, Gibas M, Tanasiewicz M, et al.: Natural and Controlled Demineralization for Study Purposes in Minimally Invasive Dentistry. Adv Clin Exp Med. 2015; 24(5): 891–8. PubMed Abstract | Publisher Full Text\n\nCao Y, Mei ML, Li QL, et al.: Enamel prism-like tissue regeneration using enamel matrix derivative. J Dent. 2014; 42(12): 1535–42. PubMed Abstract | Publisher Full Text\n\nEl Moshy S, Abbass MMS, El-Motayam AM: Dataset 1 in: Biomimetic remineralization of acid etched enamel using agarose hydrogel model. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16050.d217398\n\nEl Moshy S, Abbass MMS, El-Motayam AM: Dataset 2 in: Biomimetic remineralization of acid etched enamel using agarose hydrogel model. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16050.d217399"
}
|
[
{
"id": "38418",
"date": "20 Sep 2018",
"name": "Raneem Farouk Obeid",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGood work, but I have some comments to clarify my confusion:\nIn the Methodology: the acrylic template - why and how was it used? And did you standardize the 2mm in this template?\n\nIn the Results:\n\nIn Figure 2 you mention the acid etch type. Where is the reference for this classification, and where are the arrows to show the different types? Picture C is hazy, please change it. Figure 3: please add arrows to show us a completely remineralized rod and an empty one in Picture C.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38419",
"date": "05 Oct 2018",
"name": "Nehad Samir Taha",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nExcellent work:\n1. The study design is appropriate to the work done. 2. Replication of this study could be applicable in the future based on the study results. 3. The statistical analysis is appropriate to the study. 4. In my opinion, the study results efficiently support the conclusions drawn.\n\nBut, I had some questions about:\n1. The acrylic templates: how to use it, and whether it is standard in size. 2. No references for the etching pattern. 3. Some pictures are hazy and with no arrows.\nFurther comments:\n1. In the Methodology: I would prefer if you could show us in one picture the differences between a defected specimen that you excluded and one that you chose.\n\n2. In the Results:\nIn Figure 1: no arrows to demonstrate the different types of acid etching. In Figure 1C: no arrows to show the closed rod cores. In Figure 2C: no arrows to show the areas of mineral deposition. No reference to the acid etching classification. Finally, I would prefer to see a figure plate comparing the 3 states together to make comparison easy (i.e.: a plate of Figures 1A, 2A and 3A, a plate of Figures 1B, 2B and 3B, and a plate of Figures 1C, 2C and 3C).\n\n3. The discussion is too short to clarify the findings of the study.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "39209",
"date": "15 Oct 2018",
"name": "Mohamed Shamel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe current study performed by El Moshy et al. is an interesting one that aims to investigate the possible biomimetic effect of agarose hydrogel in remineralizing a human demineralized enamel model.\nOverall, the study is well constructed and presented, with the results efficiently supporting the discussion and conclusions.\nHowever, some minor points might be helpful in adding to the study:\nThe SEM images used were cropped from the original images, which caused some blurriness in the images. A more detailed statistical analysis needs to be performed on the relatively large amount of data obtained. The discussion section needs to be more detailed as it is too short in comparison to the results obtained.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "39250",
"date": "15 Oct 2018",
"name": "Mahmoud M. Al-Ankily",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis report by El Moshy et al. examines biomimetic remineralization of acid etched enamel using an agarose hydrogel model. The authors conclude that the agarose hydrogel model has remineralizing potential to treat early carious lesions. The study, although it is small, adds knowledge to the existing literature.\nMinor comments would help to improve the impact of this paper:\nMethods Specimen preparation: It would be better to use premolars extracted for orthodontic treatment, in which the enamel surface is usually intact and standard in enamel rod orientation, crystallization and size, rather than mandibular third molars with their anatomical variation, surgical instrumentation and surface irregularities.\nRemineralization: (Agarose hydrogel and phosphate solution were prepared as previously mentioned) there is no previously mentioned information about hydrogel and phosphate solution preparation.\nSurface roughness (SR) analysis: The average SR was expressed in 1 μm only. How can you be sure that you measured SR at baseline, after demineralization and after remineralization on the same 1 μm area every time? It is recommended to use another type of surface roughness analysis with a wider surface area, at least 25 μm, such as AFM.\n\nResults SEM examination: Figures at ×3000 are not good enough to show the changes at baseline, after demineralization and after remineralization; also ×5000 is not so clear. Please do not shrink the figures, to preserve the benefits of the magnification. 
It is recommended to use ×20,000 to ×50,000 to show the crystals, the regenerated mineralized tissue and the crystal orientation.\nDiscussion: Needs more details about the mechanism of growth of the enamel crystals and the mechanical properties of the regenerated tissue.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "39249",
"date": "17 Oct 2018",
"name": "Mahmoud M. Bakr",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe study by El Moshy et al. is well designed and investigates an important topic with potential clinical applications.\nHowever, there are a few issues that need to be addressed to improve the quality of the manuscript:\nThe introduction and discussion sections are extremely short and do not cite enough literature on the topic.\n\nThe discussion section, besides being short, is basically a repetition of some results without discussing technical aspects of the study and comparing it to previous similar studies.\n\nThe quality of some of the images could be improved.\n\nThe statistical analysis is not a true representation of the results and could be inaccurate. The effect of time is neglected in this case. Ideally a two-way ANOVA should be used to illustrate the interaction between time and treatment. If no interaction was observed, then the single main effects of time and/or treatment could be reported. Using the post hoc analysis is not the best practice as it neglects the effect of time and will most likely lead to false-positive results.\n\nSome images for illustration of the techniques used in the materials and methods section would be helpful.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1476
|
https://f1000research.com/articles/7-1338/v1
|
24 Aug 18
|
{
"type": "Software Tool Article",
"title": "FastQ Screen: A tool for multi-genome mapping and quality control",
"authors": [
"Steven W. Wingett",
"Simon Andrews"
],
"abstract": "DNA sequencing analysis typically involves mapping reads to just one reference genome. Mapping against multiple genomes is necessary, however, when the genome of origin requires confirmation. Mapping against multiple genomes is also advisable for detecting contamination or for identifying sample swaps which, if left undetected, may lead to incorrect experimental conclusions. Consequently, we present FastQ Screen, a tool to validate the origin of DNA samples by quantifying the proportion of reads that map to a panel of reference genomes. FastQ Screen is intended to be used routinely as a quality control measure and for analysing samples in which the origin of the DNA is uncertain or has multiple sources.",
"keywords": [
"Bioinformatics",
"Contamination",
"FastQC",
"Illumina",
"Metagenomics",
"NGS",
"QC",
"Sequencing"
],
"content": "Introduction\n\nIn general, reaching sound conclusions from sequencing experiments requires the origin of a sample to be identified correctly prior to mapping. To reduce the risk of contaminants leading to incorrect inferences, it is advisable to map sequencing results against not only the expected reference genome but also against reasonable sources of contamination. Common reasons for contamination include amplifying the wrong target molecule, unwanted DNA being present in reagents used in library generation, carry-over from samples previously loaded onto a sequencing machine or sample swaps.\n\nThe tool utilises either Bowtie1, Bowtie 22 or BWA3, as preferred by the user, to map reads against pre-specified genomes. FastQ Screen presents the mapping results in both text and graphical formats, thereby allowing the user to confirm the genomic origin of a sample or identify sources of DNA contamination. The tool summarises the proportion of reads that map to a single genome or to multiple genomes. In addition, it reports whether those alignments are to a unique position, or to more than one location, within the genome of interest (Figure 1).\n\nReads either i) mappped uniquely to one genome only (light blue), ii) multi-mapped to one genome only (dark blue), ii) mapped uniquely to a given genome and mapped to at least one other genome (light red) or multi-mapped to a given genome and mapped to at least one other genome (dark red). The reads represented by blue shading are significant since these are sequences that align only to one genome, and consequently, if are observed in an unexpected genome they suggest contamination.\n\nFastQ Screen functionality is generally independent of the laboratory protocol followed and so can be used to analyse genomic DNA, RNA-Seq4, ChIP-Seq or Hi-C experiments. 
In addition, FastQ Screen is compatible with Bismark5, and so can also be used to process bisulfite sequence data.\n\nOther tools exist with similar functionality to FastQ Screen, most notably Multi Genome Alignment (MGA)6. FastQ Screen has a number of advantages over these tools, including directly reporting the proportion of multi-mapping reads, thereby helping identify DNA populations rich in low-complexity sequences. Another benefit of our program is the capability to create filtered FASTQ files. FastQ Screen is also the only quality control (QC) tool that aligns reads to multiple bisulfite reference genomes.\n\n\nMethods\n\nThe program utilises a short read sequence aligner to map FASTQ reads against pre-defined reference genomes. The tool records against which genome or genomes each read maps and summarises the results in graphical and text formats.\n\nWe coded FastQ Screen in Perl and made use of the CPAN module GD::Graph for the generation of summary bar plots. The software requires a functional version of Bowtie, Bowtie 2 or BWA, and should be run on a Linux-based operating system. FastQ Screen uses Plotly to enable visualisation of results in a web browser. The tool takes as input a text configuration file and FASTQ files, which are sub-sampled by default to 100,000 reads to reduce running times, and then mapped to a panel of pre-specified genomes.\n\n\nUse cases\n\nPreliminary sequencing QC: FastQ Screen provides preliminary evidence on whether a sequencing run has been successful, as demonstrated in Figure 1, which shows results using a publicly available RNA-Seq sample (SRR5100711) labelled as mouse. The software processed the deposited FASTQ file to generate summary results in text, HTML and PNG format. 
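For context, the configuration file mentioned above pairs each screened genome with a pre-built aligner index. The fragment below is a sketch with hypothetical paths and an arbitrary genome panel (in the real file, the fields on `DATABASE` lines are tab-separated):

```
## fastq_screen.conf -- illustrative sketch; all paths are placeholders

## Location of the aligner binary (optional if it is on the PATH)
BOWTIE2   /usr/local/bin/bowtie2

## One DATABASE line per screened genome: DATABASE <name> <index basename>
DATABASE  Human     /data/indices/bowtie2/GRCh38
DATABASE  Mouse     /data/indices/bowtie2/GRCm38
DATABASE  Rat       /data/indices/bowtie2/Rnor_6.0
DATABASE  Adapters  /data/indices/bowtie2/adapters
```

A screen would then be run with something like `fastq_screen --conf fastq_screen.conf sample.fastq.gz`; by default the tool subsamples 100,000 reads before mapping, as noted above.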
As expected, the dataset contained a substantial proportion of reads that mapped only to the mouse genome, and although a sizeable proportion of reads mapped to both the mouse and rat genomes, that may have also been expected considering the close evolutionary relationship between those two species. Of concern, however, was the discovery that 11.4% of the reads mapped solely to the human genome, suggesting the sample was contaminated. This may prove problematic if human-derived reads that also align to the mouse reference genome are not removed, since differences between mouse samples may then actually reflect the variation in the degree of contamination between the samples rather than genuine biological differences. Very few reads aligned to adapter sequences, which was an encouraging observation.\n\nIdentifying sample origin from a range of alternatives: FastQ Screen was recently used by researchers to identify the origin of the clothes of the Tyrolean Iceman (popularly named Ötzi), a famous 5,300-year-old natural mummy discovered in 1991 in the Italian Ötztal Alps. By screening sequences against probable sources of preserved leathers, the research team showed that the iceman’s hat came from brown bear, his quiver from roe deer and his loincloth from sheep7. In a similar fashion, FastQ Screen has been used to determine the animal origin of vellum found in 13th century Bibles8.\n\nFiltering results: FastQ Screen can also be used to filter reads mapping (or not mapping) to specified genomes. This has numerous applications, most typically to remove DNA contaminants, as exemplified by a recent clinical microbial metagenomics study in which nucleic acids were extracted from porcine faeces9. 
FastQ Screen was then used to filter out host sequences, and the remaining reads were then mapped, leading to the identification of over 1,600 bacterial and archaeal species and viral strains.\n\nIn contrast, in some experiments the source of contamination may be completely unpredictable, and so we have incorporated a setting in which all unsuccessfully mapped reads are written to a FASTQ format output file. This may then be used by other resources, such as BLAST, to determine the origin of those sequences.\n\n\nSummary\n\nSince its release, FastQ Screen has been used to analyse a myriad of sequencing datasets. We initially envisioned the software as a QC tool to complement our related program FastQC, but we subsequently used the software to confirm the origin of samples and added functionality for filtering FASTQ reads. The program may be used in conjunction with several common aligners, including Bismark for processing bisulfite libraries. FastQ Screen has been incorporated by other groups into bioinformatics workflows, was reimplemented in the recently released QC tool Aozan10, and is compatible with MultiQC11, a tool to aid comparison of samples with respect to a large number of QC metrics.\n\n\nSoftware availability\n\nFastQ Screen is available from: https://www.bioinformatics.babraham.ac.uk/projects/fastq_screen\n\nSource code available from: https://github.com/StevenWingett/FastQ-Screen\n\nArchived source code as at time of publication: http://doi.org/10.5281/zenodo.134458412\n\nLicense: GNU GPL 3.0",
"appendix": "Grant information\n\nThis work was supported by the Medical Research Council (G0801156) and the Biotechnology and Biological Sciences Research Council of the UK (BBS/E/B/000C05).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank Felix Krueger who helped with making FastQ Screen compatible with Bismark and Philip Ewels who gave advice on generating the HTML format summary results. Mikhail Spivakov and Jonathan Cairns both assisted with the manuscript preparation.\n\n\nReferences\n\nLangmead B, Trapnell C, Pop M, et al.: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3): R25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–1760. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoodham EF, Paul NR, Tyrrell B, et al.: Coordination by Cdc42 of Actin, Contractility, and Adhesion for Melanoblast Movement in Mouse Skin. Curr Biol. 2017; 27(5): 624–637. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrueger F, Andrews SR: Bismark: a flexible aligner and methylation caller for Bisulfite-Seq applications. Bioinformatics. 2011; 27(11): 1571–1572. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHadfield J, Eldridge MD: Multi-genome alignment for quality control and contamination screening of next-generation sequencing data. Front Genet. 2014; 5: 31. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nO'Sullivan NJ, Teasdale MD, Mattiangeli V, et al.: A whole mitochondria analysis of the Tyrolean Iceman's leather provides insights into the animal sources of Copper Age clothing. Sci Rep. 2016; 6: 31279. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFiddyment S, Holsinger B, Ruzzier C, et al.: Animal origin of 13th-century uterine vellum revealed using noninvasive peptide fingerprinting. Proc Natl Acad Sci U S A. 2015; 112(49): 15066–15071. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRose G, Wooldridge DJ, Anscombe C, et al.: Challenges of the Unknown: Clinical Application of Microbial Metagenomics. Int J Genomics. 2015; 2015: 292950. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerrin S, Firmo C, Lemoine S, et al.: Aozan: an automated post-sequencing data-processing pipeline. Bioinformatics. 2017; 33(14): 2212–2213. PubMed Abstract | Publisher Full Text\n\nEwels P, Magnusson M, Lundin S, et al.: MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics. 2016; 32(19): 3047–3048. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWingett S: StevenWingett/FastQ-Screen: Release v0.12.1 especially for Zenodo (Version 0.12.1.zenodo). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1344584"
}
|
[
{
"id": "37620",
"date": "29 Aug 2018",
"name": "Russell Hamilton",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nWingett and Andrews present FastQ Screen for mapping sequencing reads to multiple genomes with the goal of identifying the genome of origin. FastQ Screen is well documented, open source and freely available via GitHub and their own website.\n\nI have been using the software routinely as part of my NGS pipelines for several years and find it to be an invaluable QC tool. Sample mix-ups, if from different species, are easily detected. I have also found FastQ Screen useful as a proxy indicator of rRNA contamination in total RNA-Seq: because rRNAs are similar between related species, they show up as multi-genome aligned reads, thus identifying samples for further processing.\n\nI therefore have no reservations in recommending FastQ Screen for indexing.\n\nSuggestions:\nEach lab or facility will have their own unique requirements for genomes to screen against, but having a suggested “starter set” of genomes may ease the burden of installation / configuration for first-time users. The bowtie website has some pre-made common, bowtie indexed, genomes for convenient download (http://bowtie-bio.sourceforge.net/bowtie2/index.shtml). 
Or more comprehensively, Illumina’s iGenome project contains a wide range of bowtie/bowtie2/bwa indexed genomes (http://support.illumina.com/sequencing/sequencing_software/igenome.ilmn).\n\nOne of the most useful features of FastQ Screen is its compatibility with MultiQC, where multiple samples are plotted together for assessment of entire sequencing runs or batches. Mentioning this in the documentation would alert users to this very useful feature.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3979",
"date": "17 Sep 2018",
"name": "Steven Wingett",
"role": "Author Response",
"response": "We agree with the excellent suggestion to create pre-built genomes for users. Indeed, the latest version of FastQ Screen (v0.13.0) now has a new option (--get_genomes) which instructs the script to download commonly used pre-built Bowtie2 reference genomes deposited on the Babraham Bioinformatics website. Along with the reference genomes, FastQ Screen also downloads a configuration file, which it subsequently edits to list the full path of the downloaded genomes as stored on the user’s machine. This setup should be ready-to-use and the selection of genomes should suit most requirements. We have updated the documentation to alert users that our tool is compatible with MultiQC."
}
]
},
{
"id": "37624",
"date": "04 Sep 2018",
"name": "Ian J. Donaldson",
"expertise": [
"Reviewer Expertise Genomics"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFastQ Screen by Wingett and Andrews is a tool to map a sample of sequenced reads against a panel of reference genomes.\n\nThe tool is comprehensively documented and is available from the authors' web site and via Github.\n\nI have personally used this tool since 2011 and it is incorporated in the quality control pipeline for our core facility on all sequencing runs. The tool has been used to detect contamination from a panel of commonly used genomes, and to estimate the contribution of sequence from mixed genome samples. It is also used to detect rRNA contamination in RNA-seq protocols. Recently the ability of FastQ Screen to filter individual reads has been incorporated into our single cell analysis pipeline.\n\nThe article is clear, well explained, and gives interesting use cases.\nMinor correction: In the legend of Figure1 - 'Reads either i) mappped' should be 'Reads are either i) mapped' 'ii) mapped uniquely' should be 'iii) mapped uniquely' '(light red) or multi-mapped' should be '(light red), or iv) multi-mapped'\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3980",
"date": "17 Sep 2018",
"name": "Steven Wingett",
"role": "Author Response",
"response": "Thank you for your comments and we are pleased that you find FastQ Screen useful in your research. We have updated the manuscript to correct the typographical errors."
}
]
},
{
"id": "37622",
"date": "06 Sep 2018",
"name": "Stéphane Le Crom",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWhen dealing with multiple high throughput sequencing experiments, especially for core facilities, you need to pay great attention to quality controls. Contaminations from different species you are working with are one of the potential problems you can encounter. The FastQ Screen software as been designed by Steven Wingett and Simon Andrews in order to solve this drawback. Using different mapping softwares, FastQ Screen allows to identify from the reads present in your samples the different species they came from. The graphical and text outputs provided, detailed information on the potential level of contamination obtained. The software objectives are clearly explained just as the way to use it and how to interpret its outputs. FastQ Screen source code is available through GitHub and a documentation is provided on the authors’ website.\n\nThis software is publicly available since several years and is used today by many genomics laboratories. The tool is very stable, the command line help is easy to understand and we not found any issue when we launch it in all our tests. Finally, the output report is informative and very clear.\n\nFastQ Screen is a must have tool for everyone working with multiple species samples or who want to prevent unpredicted contamination of its samples.\n\nRemarks 1. In order to more clearly explain the way FastQ Screen is working, more information should be provided on pre-defined reference genomes. 
How can they be chosen before running FastQ Screen? Is there a limit on the number of genomes selected? Is there a significant impact on software running time according to the number or size of the reference genomes selected? Is there a list of already available pre-defined reference genomes? Does it work using a subset sequence database or with the whole genome?\n\n2. It could also be interesting to get some information about the running time of several analyses. What are the specifications required for the computer needed to run FastQ Screen? I did not find this information in the documentation.\n\n3. It seems that FastQ Screen, when processing large datasets (FASTQ files of more than 100 million reads), uses a large amount of memory, as it stores the identifier of each read. It may be useful to advise users in the documentation about this issue.\n\nMinor remarks 1. For future evolution of the FastQ Screen software it could be interesting to provide a clue for the “No Hits” reads obtained. In the paper the authors suggest running BLAST analyses. Perhaps there is a way to use a subset of “No Hits” reads in order to guess where the possible contamination is coming from?\n\n2. The link to the “Babraham Bioinformatics download page” in the documentation “https://www.bioinformatics.babraham.ac.uk/projects/fastq_screen/_build/html/index.html#download” is not working\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3981",
"date": "17 Sep 2018",
"name": "Steven Wingett",
"role": "Author Response",
"response": "Thank you for your detailed feedback. Both you and the reviewer Dr Hamilton pointed out that FastQ Screen would be better served if we made pre-made genome indices available. As described in our response to Dr Hamilton, FastQ Screen now has a --get_genomes option to obtain Bowtie2 genome indices. We have also provided more information in the documentation (https://www.bioinformatics.babraham.ac.uk/projects/fastq_screen/_build/html/index.html), explaining how a user may create the desired aligner index files.We now also mention in the documentation that FastQ Screen has an upper limit to the number of genomes that may be used. This value, which is 32, is the result of how the script records read/genome mapping data as a 32-bit variable. This limit far exceeds every task we have performed with FastQ Screen. Should more genomes be required at a future date however, we would be able to modify the code to increase the maximum allowed value.The time taken to process a dataset varies substantially depending on the input data and specified parameters. For example, larger files take longer to process than those with fewer reads. Similarly, screening against more genomes will increase processing times, as will the complexity and size of the genomes to which the aligner maps FASTQ reads. Using FastQ Screen in the default quality control (QC) mode, in which only a subset of the data is processed, is significantly quicker than using the tool to filter a dataset. Screening bisulfite libraries is also a more computationally intensive task and therefore takes significantly longer to complete. In contrast, times may be reduced by muti-threading submitted jobs. The hardware of a system will of course also substantially impact running times, as will the competition for system resources from the jobs being run concurrently with FastQ Screen.There are similar points to consider when evaluating memory overheads. 
Most notably, running the program in QC mode will require substantially less memory than filtering a dataset. The software needs to hold in memory whether a read maps to any of the reference genomes. In QC mode, this will only be necessary for approximately 100,000 reads, but when filtering this will be required for every read, simultaneously, in the FASTQ file – and FASTQ files may comprise hundreds of millions of reads. To help the user make sense of these considerations, we have now included in the documentation a report of the memory requirements and time taken to process different data files, using different parameters. Obviously, it is impossible to cover every scenario, but as a general rule using the tool to QC a dataset should take minutes whereas filtering a large dataset may take several hours. We added extra information in the documentation pertaining to the system requirements necessary to run FastQ Screen. To summarise these, FastQ Screen should be run on an up-to-date Linux operating system that has Perl installed and has a working version of either Bowtie, Bowtie2 or BWA installed. So far as we can tell, the following web link is functional: https://www.bioinformatics.babraham.ac.uk/projects/fastq_screen/_build/html/index.html#download. In case there is any confusion, this is an anchor to the “Download” section in the documentation and not a link to the software download page. Thank you for bringing to our attention the need for follow-on support for reads that map to no genomes (extracted when using the --no_hits parameter). We intend to address this in future FastQ Screen releases by adding new functionality to the software."
}
]
},
{
"id": "37623",
"date": "17 Sep 2018",
"name": "Matthew D. Teasdale",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this paper Wingett and Andrews describe FastQ Screen a program for quality control and source species identification. The paper is well written with clear example use cases and I am very happy to recommend FastQ Screen for indexing.\n\nI have personally used FastQ Screen for over 6 years and now consider it to be an essential part of my analysis pipelines. The documentation for the program is excellent and it is under active development.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3982",
"date": "17 Sep 2018",
"name": "Steven Wingett",
"role": "Author Response",
"response": "We are delighted that FastQ Screen has proven useful in your research and we hope it will remain part of your analysis pipeline as we add new features to the software."
}
]
}
] | 1
|
https://f1000research.com/articles/7-1338
|
https://f1000research.com/articles/7-1471/v1
|
17 Sep 18
|
{
"type": "Research Article",
"title": "Dose response and working memory limit in an eye movement desensitisation and reprocessing prospective case series",
"authors": [
"Alan Hassard",
"Heather Turner",
"Kathryn Smith",
"Heather Turner",
"Kathryn Smith"
],
"abstract": "Background: Eye movement desensitization and reprocessing (EMDR) is a psychological therapy for post-traumatic stress disorder, or any disorder where the patient reports distressing imagery. We report here a prospective case series to test the prediction that the average number of distress images tends to seven. Methods: Patients in a sexual health clinic were offered EMDR treatment. In total, 130 were entered and 50 completed treatment. All distressing images to all bad life events and anxieties reported were treated. Images that caused high distress or stopped progressing were usually decomposed until progress resumed. Results: The median number of images per patient was seven. This required three treatment sessions in a total of five appointments, on average. This result was replicated twice in separate retrospective case series. Conclusion: We propose that EMDR works by unloading an overloaded memory buffer. If this bandwidth is liberated by treatment, this permits the cognitive and emotional change observed in EMDR treatment. The tendency to seven may signal involvement of the working memory limit. This approach enables clinical decision making and gives common ground with other psychotherapy methods.",
"keywords": [
"Eye Movement Desensitization and Reprocessing",
"Dose dependency",
"Working memory",
"Clinical decisions",
"Seven"
],
"content": "Introduction\n\nEye movement desensitisation and reprocessing (EMDR) is a psychological therapy, originally for post traumatic distress disorder (PTSD), but generalizing to any patient who reports distressing imagery1,2. The image is specified with negative cognitions and distressing physical sensations. The patient is then taken through a procedure based on series of sets of 25 eye movements with the guidance of the therapist. The patient will report the image is fading or changing in some way, the negative cognitions improving and the distress reducing. The cognitions and distress improve without necessarily being addressed by direct verbal methods. Other methods of sensory stimulation have the same effect. There is no explanation of how this procedure works. An explanation is required to inform clinical judgements and to permit the design of more accurate trials.\n\nOn the basis of clinical experience, we hypothesize that EMDR is superior to cognitive behavioural therapy (CBT) because it is faster. If a treatment can be accelerated with equal benefit, then more patients could be treated within a given budget. One premise of a controlled trial is that both control and experimental group receive equivalent treatment, except for the variable being investigated. In psychological trials, this requires that both groups should receive an equal treatment time. The published trials do not always balance the number of sessions or treatment time. Consider a trial in which EMDR is compared to CBT over ten weekly appointments of one hour. CBT includes homework, such as listening to audio files of the trauma event, or real life exposure when practical. Assume this homework takes one hour per day, happens each of the six days the patient does not attend an appointment and is performed with perfect compliance. Therefore, EMDR clients are getting 10 hours of treatment, while CBT clients are getting 70 hours. 
If the trial shows equal results for each therapy, then we should conclude that EMDR is seven times faster than CBT and avoids the problem of homework compliance. Homework compliance is associated with improvement in CBT for PTSD3. Some trials address this problem4 and some do not (e.g. 5). For a recent review see 6.\n\nTo prove EMDR is faster than CBT, patients in a controlled trial should receive an equal number of sessions. Since we could not do a trial in our circumstances, we investigated if the number of EMDR sessions required is limited by the number of distress images. In our clinic, eye movement therapy has been used since 1991 and was based on Shapiro’s original reports1,2,7. Clinical experience showed that EMDR typically required treatment of around seven images. We hypothesised that the average number of distress images in an EMDR series would be attracted to seven, plus or minus two, images. A retrospective investigation of 400 case files discovered an average of 5.5 distress images8. To investigate this, we counted the distress images in a prospective series and collected demographic and evaluation data.\n\n\nMethods\n\nThis study was approved by the Plymouth Local Ethics Committee (number 1743). All patients gave written informed consent to enter the study. The original analysis plan, using non-parametric tests, required 50 patients to complete. However, this plan was superseded by the analysis described below. The patients were recruited between September 2001 and March 2007.\n\nPatients in the Genitourinary Medicine clinic (GUM) were offered EMDR whenever distressing images were reported. No a priori judgements of single trauma event, multiple events or complexity of case were made. Patients were recruited according to the following inclusion criteria:\n\n1. They reported a defined traumatic event or history;\n\n2. English was their first language;\n\n3. They were not receiving any concurrent psychological therapy or counselling. 
If they were, they were willing to suspend this during EMDR treatment;\n\n4. They were not suffering any concurrent bad life event or illness;\n\nPatients were disqualified from the study under the following circumstances:\n\n1. They suffered a bad life event during treatment. For example, a serious illness suffered by themselves or a family member;\n\n2. Any increase in medication. If medication levels were stable, or decreasing, they were retained;\n\n3. They requested to leave.\n\nA disqualifying event was any active event. For example, a patient with stable diabetes was entered, but a patient who was diagnosed with diabetes the week after entry was removed from the study. Removed patients continued with EMDR or other treatment or help as required. Most had a history of sexual health problems that brought them into GUM in the first place. Some patients were the marital or sexual partners of GUM patients, but without primary sexual health concerns. The following information was collected for each patient: sex, age, marital status, employment status, initially presented trauma, how long since that trauma, any significant mental health or addiction issues, presence of panics, total contact hours and number of EMDR treatment sessions.\n\nEMDR treatment reduces clarity and perception of distress images and consequently the distress score reduces.\n\nWe recorded the number of distress images reduced to a low distress score that remained stable on retesting in subsequent treatment sessions (designated “F”). Distress was measured with the Subjective Units of Disturbance score from zero to ten and the end point was taken as zero, one or two. The Foa PTSD questionnaire was used to measure initial severity9.\n\nTreatment progress was measured with three questionnaires: the Impact of Events Revised (IOE-R), the General Health Questionnaire (GHQ-12), and the Posttraumatic Cognitions Inventory (PTCI). 
The Impact of Events revised version was used, which matches the later definition of PTSD10,11. A drawback of the IOE-R is that it only refers to one bad event. We intended to deal with all reported events. Patients who reported more than one were asked to consider the initial event. The GHQ-12 is a psychiatric screen12. The PTCI measures cognitive symptoms and was chosen to exclude imagery or physiological arousal13.\n\nAll patients received EMDR from the clinical psychologist (AH). This psychologist had 10 years’ experience with EMDR before the study began and was level one and level two trained. EMDR, as used here, was based on the original protocol1 and resembles the “eye movement desensitization” protocol that can be found in Chapter 9 of the third edition of Shapiro’s manual7. However, it does not include all eight stages later described by Shapiro, elsewhere in this book.\n\nBody scan, re-evaluation, debriefing and closure were employed when required. Three parts of the eight-phase EMDR procedure were omitted. The safe place procedure and the “cognitive interleave” were never required. If distress stopped progress, the image was reassessed or decomposed to smaller components. At a certain point, the treatment would start reducing distress again. If the patient was distressed at the end of the session, a brief relaxation was taught. The patients reported cognitive changes during the procedure and the PTCI was intended to measure this. The “positive cognitions” procedure was not used. Here, patients report what positive belief they would like concerning themselves and rate it on a one to seven validity scale. This is then used for “installation” of positive cognitions. The therapist dropped this because switching from discussing the bad event to discussing what the patient might think in an ideal world was confusing and distressing for the patient14. 
It became redundant since patients would report positive thoughts when treatment continued through the series of distress images.\n\nTreatment sessions began by revisiting relevant issues. All distressing images collected or treated in previous sessions were reassessed. The EMDR was then started. The starting rule was expressed as follows: “We will start on the highest scoring image, because that probably makes the procedure shorter in the long run. If you are not ready to start there, please choose somewhere else to start”. The patient was asked to attend to the chosen image and nod when ready. He or she was then taken through sets of 25 eye movements. A stick 70 cm long, with a coin at one end as a visual target, was used to guide the eye movements. If the patient reported a problem with eye movements, another method of sensory stimulation was used. This was auditory or tactile stimulation using the device from www.neurotekcorp.com. This change was judged as a clinical decision and no attempt was made to balance it between patients. After each set, the patient was reassessed. When a new image was reported, it was treated or recorded for later treatment.\n\nIf the patient became distressed, treatment was continued until the distress reduced. If the distress did not reduce after four sets of eye movements, the image was changed in some way. The first move was to enquire if there was a more distressing image, negative thought, emotion or physical sensation or pain. Another option was to move to an alternative image that did reduce and return to the difficult image later. The best option was to decompose the target image in some way. One method was to reduce that part of the timeline the patient was focusing on. For example, the event would be divided up into smaller stages of the sequence. The second was to divide up into sensory modalities. 
For example, a sexual assault could be divided up into the physical pain images, the sound of the assailant’s voice and the feeling of the weight of their body. Each sensory image was treated individually and included in the “F” count. This course of action generally resulted in effective treatment.\n\nAll reported distress images were treated. This included any anxiety or distress trigger that could be elicited as an image and scored, not just the initial presented event. EMDR targets also included images of future events (“flashforwards”). Other images were imagined or symbolic representations of events where the patient had been absent or unconscious. Some images were general (semantic) memories, not episodic. For example, a repeated act of child abuse would be reported as one image, not individual episodes with time tags. Such semantic images might decompose to episodes with treatment, but could be treated without this happening. One strategy to use such semantic memories for desensitization was to ask the patient to imagine the face of the assailant or abuser, then describe and score bad emotions.\n\nThe patient was briefed on the procedure. Any problems or limitations of EMDR specific to that patient were discussed. The patient was informed of the seven image average to achieve informed consent, but it was emphasized that this was an average. The patient could report more or less than seven. It was made clear that treatment would continue until all reported distressing images, of whatever origin, were treated. The initial assessment explicitly included listing all trauma life events and distress triggers. The patient gave signed consent to the project as required by the local ethical committee.\n\nEach patient received the IoE-R, GHQ12 and PTCI questionnaires on each treatment appointment. On the first treatment appointment, the Foa questionnaire was also given. 
Questionnaires were completed on arrival, before the appointment began.\n\nIn the first session, the EMDR procedure began at an agreed starting point. In the second and subsequent appointments, the patient was reassessed for each image collected in previous sessions. The patient visualised the image, reported if that was impossible, difficult or easy, reported the distress score and then anything else relevant. Treatment then continued, as described above. Treatment sessions continued until all images were desensitised to a distress score of zero, one or two. The number of images desensitized was recorded on each occasion (“F”). The value of F recorded in the first treatment session was zero.\n\nIt was not expected that EMDR treatment would be sufficient for all the needs of all the patients or that all patients would complete treatment. Patients in this clinic have a high drop-out rate from medical treatment. We investigated if there were any systematic differences between completers and dropouts, using the data collected. A record was kept of all patient outcomes or destinations.\n\nSix months after treatment ended, all 50 patients who completed treatment were contacted by mail. The letter contained a list of recorded distress images and the three questionnaires. The patient was requested to score all items and return by stamped addressed envelope. In total 31/50 scores were returned.\n\nAnother retrospective count of distress images was included in an audit of appointment attendance. This was a “file drawer” sample, to replicate this result. In total 200 cases were audited from GUM, of which 167 (83%) were EMDR cases. These cases were collected between November 2010 and June 2012.\n\nA second retrospective series was collected at GUM during a period when it appeared that some severe cases were showing high numbers of images. The idea was to test the resilience of the hypothesis that the average would tend to seven. 
These cases were collected between November 2015 and July 2017.\n\nThese two series were taken from routine audit databases for which ethical permission is not required by local rules. The three case series reported were all separate, with no overlap of cases.\n\nData was analysed using R version 3.4.1. R code used in the analysis can be found in Dataset 115.\n\n\nResults\n\nTo acquire 50 patients who finished treatment, it was necessary to recruit 130 patients, giving a fail-to-complete number of 80 (62%). There were 104 females and 26 males. The average age was 33 years. A total of 85 (65%) were employed, 25 unemployed, 14 students and 6 retired; 67 (51%) were single, 54 (41%) were married or co-habiting, and the 9 remaining were divorced. They were all white British, except one Irish female and one white South African male. These patients were entered into the study between September 2001 and March 2007.\n\nThe initially reported trauma event occurred at a mean of 12 years previously. Forty (30%) patients reported medical problems, including 3 stable diabetics. Thirty-nine (30%) reported another mental health or addiction problem, of which 32 reported depression and 12 reported panics; twenty-four required medication for that problem. Initially presented trauma events, by sex, are shown in Table 1 for all 130 patients. Most patients were sexually assaulted females. No distinction was made between different types of childhood abuse. Sexual misadventure means any such trouble that was not sexual assault. See Dataset 217.\n\nThe 50 patients who completed treatment required a mean of 3.8 treatment sessions (mode = 3.0). A treatment session was one hour long. The mean number of contact hours required per patient was 5.04 (mode = 4.0). This includes assessment, treatment sessions, follow-up and time required for other matters.\n\nConvincing improvements in questionnaire scores are required to validate the results below. 
50 cases completed treatment, resulting in 49 complete before and after pairs. There were 31 complete follow-up scores, which were paired with the respective after scores. Good score decreases were observed for all three questionnaires, all of which had low probability values. All score changes and statistics are shown in Table 2–Table 4. Follow-up gave stable values for the IoE-R scale and PTCI. Note the PTCI decreased without any items concerning imagery. However, the GHQ-12 follow-up scores increased, with statistical significance. Raw data is available in Dataset 318 with analysis programmes in Dataset 115.\n\nScores are summarized as medians, for First, Last and Follow-up (FU), in each of Table 2–Table 4. There were 48 pairs First v. Last and 31 pairs for Last v. Follow-up. Z = Z score; number of standard deviations difference. W = Wilcoxon signed-rank test value. P = Probability median scores are different by chance; values are multiplied by 2, to adjust for multiple comparison, by Bonferroni correction. C = Common language effect size20; this is the probability that any initial score will be different from any subsequent score15.\n\nThe hypothesis was that the average number of distress images would approach seven. The frequency distribution of F is shown in Table 5 (Dataset 419). The mean value for the 50 patients is 7.6 images. The median was seven and the mode was seven. Figure 1 shows the case series of the 50 completed cases as a “law of large numbers” graph. This is a plot of progressively calculated running means on the Y axis, ascending the case series on the X axis. This shows that after about 20 cases, the mean number of distress images is between seven and eight, close to the hypothesised value. Since the data are not a random sample, we cannot assume a conventional statistical model to test this result. However, having shown the average tends to seven, we need a way of assessing and testing variation in this value.\n\nF = number of distressing images, treated per patient. Mean = 7.62; Mode = 7; Median = 7.\n\nFor 50 cases in prospective series. Dotted lines show 95% percentile bootstrap confidence intervals.\n\nWe can investigate this with bootstrapping21, which simulates repeating the study many times to give an empirical distribution for F. In this case, a sample of 50 cases was taken from the data, at random, with replacement of each value, so there are always 50 cases to sample from. The sampling was repeated 1000 times. Each sample of 50 generates a mean value and consequently confidence intervals can be found for the true mean of F. An R program was written to achieve this16,22,23. We used percentile confidence intervals, so the 95% boundaries take the value of the 25th lowest and highest means from the 1000 iterations. 
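The running-mean (law of large numbers) plot and the percentile bootstrap described here can be sketched in a few lines. A minimal Python translation of the procedure (the authors' analysis used R; the F values below are made up, standing in for the real series):

```python
import random

def running_means(values):
    # Progressive means for the law-of-large-numbers plot (as in Figure 1).
    total, out = 0.0, []
    for i, v in enumerate(values, start=1):
        total += v
        out.append(total / i)
    return out

def bootstrap_ci(values, reps=1000, seed=1):
    # Resample n cases with replacement, reps times; the 95% percentile
    # bounds are the 25th lowest and 25th highest of 1000 resampled means.
    rng = random.Random(seed)
    n = len(values)
    means = sorted(sum(rng.choice(values) for _ in range(n)) / n
                   for _ in range(reps))
    k = int(reps * 0.025)  # 25 when reps = 1000
    return means[k - 1], means[-k]

f_values = [7, 5, 9, 7, 12, 3, 8, 7, 6, 10, 7, 4, 9, 8, 7, 6, 11, 7, 5, 8]
lo, hi = bootstrap_ci(f_values)
print(running_means(f_values)[-1], lo, hi)
```

Each run with a different seed shifts the bounds slightly, which is why the paper quotes example interval values rather than fixed ones.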
This program will give slightly different results each run, but example values are 6.68 and 8.90, which accommodate the hypothesised value and suggest that in groups of 50 patients we would expect the average number of distress images to be between six and nine. The bootstrap confidence interval is skewed right, reflecting that a small number of cases with a high number of distress images have a big influence on the mean in this relatively small group size. The bootstrap values and graphs are in Dataset 217 (RunningMeans&PlotFig1to4.R). The intention to treat value of \"F\" was determined from the 99 cases who completed at least two sessions and recorded at least one F value. This includes the 50 cases discussed above. The mean was 6.5 and the mode and median were both 6.0.\n\nThe first retrospective series contained 167 cases (Figure 2). The sample mean is 7.17. Using the same R program, 95% percentile boundaries are 6.57 and 7.86. As we would expect with a larger group size, there is less variation in the mean and the bootstrap confidence interval is more symmetric about the sample mean (Dataset 419). The second retrospective series contained 106 cases (Figure 3). It was biased at the start, with high F values of 25, 20, 16; the minimum value was 1. The mean was 8.5, with 95% percentile boundaries of 7.5 and 9.5. This shows the increase in the mean when some severe cases are in the series (Dataset 419). The two retrospective series included complete cases and those who did not complete, but were considered successful before they failed to attend. If all 323 cases reported here were combined, the mean was 7.68, inside boundaries of 7.20 and 8.20 (Figure 4). This estimates the long-term average of distress images for populations similar to that attending the clinic.\n\nFigure 2: Final mean = 7.17.\n\nFigure 3: Final mean = 8.50.\n\nFigure 4: Final mean = 7.68.\n\nTable 6 shows the destinations of the 130 patients recruited. Nine patients left the project, but continued treatment, after another life event. 
EMDR was not sufficient for six patients who required further cognitive therapy (from the same therapist). Twelve attended assessment, consented to the trial, then did not attend treatment (DNA). Patients who dropped out of treatment were divided into those who attended at least two sessions and therefore contributed data, and those who attended once. Questionnaire and other data showed improvement in the drop-outs, not discussed further for reasons of space. Six patients decided to leave EMDR treatment for another therapist. The 50 finishers were compared with the 46 who started treatment but did not finish and for whom all data was collected. No significant differences were detected for any collected parameters.\n\nEMDR, eye movement desensitisation and reprocessing.\n\n\nDiscussion\n\nThe median number of distress images was seven. We defined and counted the unit “distress image” as that which reduces in distress when treated with eye movements. This definition is validated by the reduction in questionnaire scores. EMDR is considered as a titration of eye movements against distress. Distress memory images can be episodic or semantic. The “distress complex” is the collective noun for those distress images reported by one patient. Three qualifications are required.\n\nFirst, only data from one EMDR therapist is reported. This result requires replication with other therapists, following the same procedure. The two retrospective replications give some mitigation for this issue. We cannot show replication with other therapists, because only one was available, but we can replicate within one therapist, across time. Second, the patients were informed of the expected seven image average. This raises the possibility that the seven is an artefact. We considered it necessary, and ethical, to inform them of this to achieve informed consent. It was emphasized that seven was an average and the patient should report all images, to all bad life events or distress triggers. 
The good before and after scores support the position that all images were reported and treated. Third, this was an uncontrolled case series, designed to observe the natural history of treatment to test a hypothesis. These limitations are dictated by the circumstances of an acute sexual medicine clinic, where a fast turnover of patients is required and where there is only one psychologist. Legal and practical difficulties limit communication to patients.\n\nSubject to these issues, we can assert that for the typical population attending the clinic, the distress complex tends to contain seven, plus or minus two, distress images. This is demonstrated by the prospective case series that had a median of seven images. The two retrospective series also stabilised in this zone. These results are similar to those in a previous report9. The largest retrospective series (of 167) had a mean of 7.17 (with near-symmetric bootstrap 95% interval of 6.67–7.86). In the second retrospective series starting with more severe cases, the mean rises to 8.5. Combining all 323 cases moves the mean back towards seven, which we can consider a stable value in the long term. This observation may be sensitive to variation in patients and method. This demonstrates that the EMDR system is dose dependent, because sufficient EMDR is required to desensitize the whole distress complex. Any trial that compares eye movement or sensory stimulation methods with CBT must ensure that the amount of treatment time is equal across groups and is sufficient to desensitize the distress complex. Commonly, such patients have collected more than one trauma event. This issue is rarely addressed in published trials. If only the worst reported life event is treated, then other distress triggers may be missed. This might limit longer term results. This conclusion may apply to all psychological therapies.\n\nSimilar numbers are reported from similar patients. 
Such similarities are worth noting, but should be considered with due caution since there are differences of definition and method in each report. First, the Rothbaum et al. study treated one distressing memory, but reported discovering an average of six others in both patient groups5. This is a total of seven. Second, a trial of EMDR treatment reported that patients were treated until they no longer had PTSD. This required treatment of six trauma memories24. Third, Holmes et al.25 counted the number of “hotspots” in a sample of patients in treatment and discovered an average of six, in the context of an average of four intrusive images of different contents.\n\nObserving this attraction towards seven raises the possibility of involvement of the working memory. Seven is identified as the approximate number of information chunks that can be held in the working memory26. If it is assumed that a distress image is a chunk, then perhaps this is a signal from the working memory limit. Working memory capacity is a complex subject, which we cannot simplify here. For reviews see Cowan27, Jonides et al.28, and Richardson29. It might be a safer initial claim that EMDR therapy works by unloading a limited capacity memory buffer, without specifying further. PTSD and similar disorders are caused by overloading this buffer.\n\nA possible model to explain the unloading is “reverse learning”, originally postulated by Crick & Mitchison30. The speed of the EMDR effect implies a physiological explanation9. The idea of unloading memory buffers enables several points to be made. First, this is a good explanation for the patient, since such overload is what chronic distress disorder feels like. Second, this enables clinical decisions in the treatment session. For example, on occasion, the patient becomes so distressed, or affected in some other way, that the treatment process stalls. This might mean the memory buffer capacity is saturated. 
Therefore, the EMDR target image should be progressively decomposed into smaller images, as described above, until distress starts to reduce again. At this point both the EMDR responsive image and the available memory processing bandwidth are defined. An analogy is threading a needle. A thread will not go through the eye of the needle if it is too big, so the size of the thread is reduced until it passes through. This liberated bandwidth enables further synergistic progress. We can now explain the cognitive and emotional change caused by the EMDR procedure. As memory bandwidth is liberated, the patient becomes progressively more able to think rationally. Distress is reduced to zero, or that level permitted by circumstances.\n\nCan anything be said about this memory buffer? First, there must be two limited capacity systems involved. The first contains the whole distress complex and averages seven images. However, when treating each image, we presumably have placed that image into a second buffer, which only holds one. These might be working memory and focus of attention, respectively. It is established that eye movements affect working memory31–33. If EMDR affects working memory capacity, then this is common ground with other therapeutic methods, such as mindfulness training34, and counting during recall35, where similar claims have been made.\n\nThe observation of a limit gives an approach to dealing with serious, or complex, cases. Using the distribution in Table 5, 96% of cases report up to 13 distress images. Therefore, only 4% of EMDR cases go to 14, or above. Such cases are either serious, or the therapist should consider if the limits of EMDR treatment have been reached and for what reason. Reaching image number 14 requires distinguishing between, for example, a child sexual abuse case with panics (in which case EMDR should continue) and some other issue which may require reconsidering EMDR. 
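The 96% figure is a cumulative proportion read off the frequency table. A sketch with a hypothetical frequency distribution standing in for Table 5 (the real counts are in Dataset 4):

```python
# Cumulative proportion of cases reporting up to k distress images, given a
# frequency table mapping F to number of cases. The table below is invented
# for illustration; it is not the Table 5 data.

def cumulative_share(freq, k):
    total = sum(freq.values())
    within = sum(n for f, n in freq.items() if f <= k)
    return within / total

freq = {3: 4, 4: 5, 5: 6, 6: 7, 7: 9, 8: 6, 9: 5, 10: 3,
        11: 2, 13: 1, 14: 1, 20: 1}  # hypothetical 50 cases
print(round(100 * cumulative_share(freq, 13)))  # prints 96
```

With the real Table 5 counts, the same computation gives the 96%-within-13 figure quoted in the text.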
When a history of childhood abuse is reported, the fact that seven is average and fourteen is rare enables patient and therapist to define the task. Patients may ask if they will require other psychotherapeutic help after EMDR. The answer can be given that, if this is required, they will benefit more from other methods once EMDR has unloaded the working memory up to this limit.\n\n\nConclusions\n\nBoth the observation of the seven images and the speed of treatment are achieved by this minimalist protocol, with decomposition of obstinate flashbacks. This probably accelerates treatment. Only an average of three treatment sessions, within five appointments in total, was required per patient. It is likely that this version of the EMDR procedure permits the observation of the distress image horizon. With EMDR using the eight-stage sequence, difficulties are solved by methods borrowed from cognitive therapy, or evaded by a retreat to the imagined safe place. In our version of eye movement or sensory stimulation therapy, difficulties are usually solved by decomposing the image into smaller elements that can be individually treated. Reducing an unresponsive distress image into smaller units permits us to continue treatment and define the units by their response to treatment. CBT does not permit this, especially if the distress complex is committed to audio or video file for self-exposure. The theory that it is necessary to match the image with some limit of memory processing bandwidth, which shows one sign of working memory, enables an account that facilitates clinical decisions and may improve trials.\n\n\nData availability\n\nF1000Research: Dataset 1. R program files16, requiring R.app or RStudio.app (on Mac) or equivalent to run (cran.r-project.org)., 10.5256/f1000research.15648.d21718315\n\nF1000Research: Dataset 2. Demographic data for participants., 10.5256/f1000research.15648.d21718617\n\nF1000Research: Dataset 3. 
The before, after and follow-up scores for the General Health Questionnaire (GHQ-12), the Impact of Events scale (IoE-R) and the Post Traumatic Cognitions Inventory (PTCI)., 10.5256/f1000research.15648.d21718418\n\nF1000Research: Dataset 4. \"F\" values for the initial prospective 50 cases and then the two retrospective series of 167 and 106 cases., 10.5256/f1000research.15648.d21718519",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThis report has benefited from advice from David Mulhall. A preliminary version of this project was reported at the ninth EMDR European Conference in London, 2008.\n\n\nReferences\n\nShapiro F: Eye movement desensitization: a new treatment for post-traumatic stress disorder. J Behav Ther Exp Psychiatry. 1989; 20(3): 211–217. PubMed Abstract | Publisher Full Text\n\nShapiro F: Efficacy of the eye movement desensitization procedure in the treatment of traumatic memories. J Trauma Stress. 1989; 2(2): 199–223. Publisher Full Text\n\nMarks I, Lovell K, Noshirvani H, et al.: Treatment of posttraumatic stress disorder by exposure and/or cognitive restructuring: a controlled study. Arch Gen Psychiatry. 1998; 55(4): 317–325. PubMed Abstract | Publisher Full Text\n\nNijdam MJ, Gersons BP, Reitsma JB, et al.: Brief eclectic psychotherapy v. eye movement desensitisation and reprocessing therapy for post-traumatic stress disorder: randomised controlled trial. Br J Psychiatry. 2012; 200(3): 224–231. PubMed Abstract | Publisher Full Text\n\nRothbaum BO, Astin MC, Marsteller F: Prolonged Exposure versus Eye Movement Desensitization and Reprocessing (EMDR) for PTSD rape victims. J Trauma Stress. 2005; 18(6): 607–616. PubMed Abstract | Publisher Full Text\n\nMoreno-Alcázar A, Treen D, Valiente-Gómez A, et al.: Efficacy of Eye Movement Desensitization and Reprocessing in Children and Adolescent with Post-traumatic Stress Disorder: A Meta-Analysis of Randomized Controlled Trials. Front Psychol. 2017; 8: 1750. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShapiro F: Eye Movement Desensitization and Reprocessing Therapy. Third Edition. Wiley, Third Edition. The Guilford Press, 2018. Reference Source\n\nHassard A: Distribution of targets in 400 eye-movement desensitization cases. Psychol Rep. 2003; 92(3 Pt 1): 717–722. 
PubMed Abstract | Publisher Full Text\n\nFoa EB, Cashman L, Jaycox L, et al.: The validation of a self-report measure of posttraumatic stress disorder: the posttraumatic diagnostic scale. Psychol Assess. 1997; 9(4): 445–451. Publisher Full Text\n\nWeiss DS, Marmar CR: The impact of event scale-revised. In Wilson JP, Keane TM, editors. Assessing Psychological Trauma and PTSD, 1997. Reference Source\n\nSundin EC, Horowitz MJ: Horowitz's Impact of Event Scale evaluation of 20 years of use. Psychosom Med. 2003; 65(5): 870–876. PubMed Abstract | Publisher Full Text\n\nGoldberg D, Williams P: A user’s guide to the General Health Questionnaire. Windsor: NFER-NELSON, 1998. Reference Source\n\nFoa EB, Ehlers A, Clark DM, et al.: The posttraumatic cognitions inventory (PTCI): Development and validation. Psychol Assess. 1999; 11(3): 303–314. Publisher Full Text\n\nHornsveld HK, Houtveen JH, Vroomen M, et al.: Evaluating the effect of eye movements on positive memories such as those used in resource development and installation. Journal of EMDR Practice and Research. 2011; 5(4): 146–155. Publisher Full Text\n\nHassard A, Turner H, Smith K: Dataset 1 in: Dose response and working memory limit in an eye movement desensitisation and reprocessing prospective case series. F1000Res. 2018. http://www.doi.org/10.5256/f1000research.15648.d217183\n\nCanty A, Ripley B: boot: Bootstrap R (S-Plus) functions. R package. Reference Source\n\nHassard A, Turner H, Smith K: Dataset 2 in: Dose response and working memory limit in an eye movement desensitisation and reprocessing prospective case series. F1000Res. 2018. http://www.doi.org/10.5256/f1000research.15648.d217186\n\nHassard A, Turner H, Smith K: Dataset 3 in: Dose response and working memory limit in an eye movement desensitisation and reprocessing prospective case series. F1000Res. 2018. 
http://www.doi.org/10.5256/f1000research.15648.d217184\n\nHassard A, Turner H, Smith K: Dataset 4 in: Dose response and working memory limit in an eye movement desensitisation and reprocessing prospective case series. F1000Res. 2018. http://www.doi.org/10.5256/f1000research.15648.d217185\n\nMcGraw KO, Wong SP: A common language effect size statistic. Psychol Bull. 1992; 111(2): 361–365. Publisher Full Text\n\nMooney C, Duval RD: Bootstrapping: A Nonparametric Approach to Statistical Inference. Sage, 1993. Reference Source\n\nR Core Team: R: A language and environment for statistical computing. 2017.\n\nDavison AC, Hinkley DV: Bootstrap methods and their applications. 1997. Reference Source\n\nMarcus SV, Marquis P, Sakai C, et al.: Controlled study of treatment of PTSD using EMDR in an HMO setting. Psychother. 1997; 34(3): 307–315. Publisher Full Text\n\nHolmes EA, Grey N, Young KA: Intrusive images and \"hotspots\" of trauma memories in Posttraumatic Stress Disorder: an exploratory investigation of emotions and cognitive themes. J Behav Ther Exp Psychiatry. 2005; 36(1): 3–17. PubMed Abstract | Publisher Full Text\n\nMiller GA: The magical number seven plus or minus two: some limits on our capacity for processing information. Psychol Rev. 1956; 63(2): 81–97. PubMed Abstract | Publisher Full Text\n\nCowan N: The many faces of working memory and short-term storage. Psychon Bull Rev. 2017; 24(4): 1158–1170. PubMed Abstract | Publisher Full Text\n\nJonides J, Lewis RL, Nee DE, et al.: The mind and brain of short-term memory. Annu Rev Psychol. 2008; 59: 193–224. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRichardson JT: Measures of short-term memory: a historical review. Cortex. 2007; 43(5): 635–650. PubMed Abstract | Publisher Full Text\n\nCrick F, Mitchison G: REM sleep and neural nets. The Journal of Mind and Behavior. 1986; 229–249.\n\nGunter RW, Bodner GE: How eye movements affect unpleasant memories: support for a working-memory account. 
Behav Res Ther. 2008; 46(8): 913–931. PubMed Abstract | Publisher Full Text\n\nPostle BR, Idzikowski C, Sala SD, et al.: The selective disruption of spatial working memory by eye movements. Q J Exp Psychol (Hove). 2006; 59(1): 100–120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaxfield L, Melnyk WT, Hayman GCA: A working memory explanation for the effects of eye movements in EMDR. Journal of EMDR Practice and Research. 2008; 2(4): 247–261. Publisher Full Text\n\nJha AP, Stanley EA, Kiyonaga A, et al.: Examining the protective effects of mindfulness training on working memory capacity and affective experience. Emotion. 2010; 10(1): 54–64. PubMed Abstract | Publisher Full Text\n\nvan den Hout MA, Engelhard IM, Smeets MAM, et al.: Counting during recall: Taxing of working memory and reduced vividness and emotionality of negative memories. Appl Cogn Psychol. 2010; 24(3): 303–311. Publisher Full Text"
}
|
[
{
"id": "44265",
"date": "19 Mar 2019",
"name": "Lonneke I.M. Lenferink",
"expertise": [
"Reviewer Expertise Clinical psychology",
"trauma",
"grief"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this study the authors conducted a prospective case series study to test the hypothesis that on average 7 distressing images need to be treated in EMDR treatment in a sample of patients who experienced various traumatic events. My suggestions for improvements of the manuscript are listed below.\nAbstract:\n“This result was replicated twice in separate retrospective case series.” It is not clear from the Background or Methods section in the abstract that a replication study was conducted. Please provide more information about this in the Abstract. “This required three treatment sessions in a total of five appointments, on average.” This is somewhat confusing. How can three sessions be offered in five appointments? “If this bandwidth is liberated by treatment, this permits the cognitive and emotional change observed in EMDR treatment.” It is not clear from the abstract that cognitive and emotional responses were assessed in this study. Please provide information in the abstract about the instruments and analyses used in this study.\nIntroduction:\n“There is no explanation of how this procedure works.” In my view, this is too strongly phrased. Prior research (e.g., from van den Hout & Engelhard) have offered possible explanations for how EMDR works. On the basis of clinical experience, we hypothesize that EMDR is superior to cognitive behavioural therapy (CBT) because it is faster.” This is not really convincing. 
There is a large body of research comparing CBT with EMDR. Please cite these studies. “The published trials do not always balance the number of sessions or treatment time.” Please provide a citation here. “Therefore, EMDR clients are getting 10 hours of treatment, while CBT clients are getting 70 hours. If the result shows equal results for each therapy, then we should conclude that EMDR is seven times faster than CBT and avoids the problem of homework compliance.” This conclusion is somewhat overstated. It is true that, if shown to be equally effective, EMDR avoids the problem of homework compliance. However, working on homework assignments in CBT or processing the aftermath of EMDR in between sessions can both be taxing for clients and take up time. So the conclusion that EMDR is 7 times faster leaves out the part that EMDR could be emotionally taxing for clients and takes time to process.\nMethods:\n“The original analysis plan, using non-parametric, required 50 patients to complete. However, this plan was superseded by the analysis described below.” Please provide more details about this. What type of non-parametric tests were planned? Was the original number of 50 patients based on a power-analysis? If so, please provide details. “Patients in the Genitourinary Medicine clinic (GUM) were offered EMDR whenever distressing images were reported.” What kind of patients were offered EMDR in the GUM? Please provide more details about the clinic and patients. “No a prior judgements of single trauma event, multiple events or complexity of case were made.” What do you mean with “no a prior judgement”? “EMDR treatment reduces clarity and perception of distress images and consequently the distress score reduces.” This sentence does not fit the heading “Measures” and better fits in the “Therapeutic procedure” section. 
“On the first treatment appointment, the Foa questionnaire was also given.” Why did you only assess the Foa questionnaire at the start of treatment and not also at the end? “We investigated if there were any systematic differences between completers and dropouts, using the data collected.” This sentence is more appropriate under the heading “Data analysis”. Please provide details about the statistical approaches used to assess differences between completers and dropouts. “The letter contained a list of recorded distress images and the three questionnaires.” Please provide more details. Why did the letter contain a list of distress images? And which three questionnaires? Please summarize which statistical methods were used in the Data Analysis section, instead of referring to the R script only. The “Two Informal Replications” part is not clearly introduced earlier in text. It is now somewhat abruptly phrased in text. To avoid confusion, it would be helpful to refer to Sample 1, Sample 2, and Sample 3 throughout the text. For instance, the Results section starts with describing the patients, but it is now not clear which sample's characteristics are described. Please provide more details about the measures used (e.g., how many items, answer options).\nResults:\nPlease provide details on the outcomes of the tests to compare completers with dropouts.\nDiscussion:\n“This probably accelerates treatment. Only an average of three treatment sessions and five in total were required per patient.” The high dropout rate (i.e., only 50 out of 130 completed treatment) is an important limitation of this study and needs to be taken into account when drawing conclusions.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "5601",
"date": "16 Jun 2020",
"name": "Alan Hassard",
"role": "Author Response",
"response": "Response to Dr Lenferink.This is a response to the qualified approval of Dr Lenferink and answer to issues raised. (1) It is advised that more information is included in the abstract. However, the abstract is intended as a summary so not all issues can be covered there. Issues raised about the abstract are considered below. For example, emotional and cognitive change was measured with the questionnaires give in the method.(2) There was no prospective replication.The two replication series were an attempt to mitigate the major problem with this project. That is both the prediction and data collection were by the same person. For more on this issue, please see reopens to Dr Mertens.(3) There was a total of five appointments, average. First, assessment and briefing. Second, three EMDR treatment appointments. Last, the follow-up and review appointment.I used \"session\" and \"appointment\" to mean the same thing, which might have caused confusion.(4) One aim of this study was to investigate an idea about how EMDR works. This is considered in the discussion. There are many papers speculating how EMDR works. We could not consider them all here.(5) There is no citation here concerning the issue of unbalanced trauma memory exposure time in control and ENDR groups, because that is an issue being raised in this paper.A discussion of it can be found in Chapter 12 of Shapiro(2018). I had not read this when the report was written.Shapiro F. (2018). Eye Movement Desensitization and Reprocessing (EMDR) Therapy. Guilford Publications.(6) The argument is to compare exposure time to trauma memories in the therapeutic procedure. No doubt both groups ate \"processing\", that is thinking about the trauma experience, from beginning to end of the treatment. (7) The original advice from the university statistics service was 50, based on a power analysis program. I cannot now provide further details which I understand is not the best answer. 
However, it became apparent as the data came in that this advice was wrong of at least based on a misunderstanding. The number of images was not based on a random distribution. Nothing was known about their distribution. That is what we were investigating.(8) Further information is requested about the patients, but not clear what is needed. This is a medical clinic providing sexual health and related services. This includes sexual assault. Other patients who attended for medical reasons were included if they reported trauma memories. Details in Table 1. (9) No prior judgment means that no judgments about single or multiple bad life events or other anxiety issues were made before treatment. All entered EMDR. (10) This sentence describing changes in image clarity and distress is included in the methods section to describe what needs to be measured.(11) The Fos PTSD questionnaire was intended as a measure of case severity to correlatewith case charcuteries such as age, sex, etc. The three questionnaires used to measurechange were not PTSD specific. (12.) Before and after the scores were analyzed with the Wilcoxon signed-rank test. The effect was also expressed with the Common language effect size This is reported under Tables 2, 3,and 4. How should we test the result of seven? We could argue that it is an observation. Since there is no theoretical model there is no conventional statistical model to test it against. We demonstrate it with the running mean plots in Figs 1,2,3 and 4. We also investigated the distribution using bootstrapping randomization statistics as described. (13) No difference was found in a comparison between completers and dropouts. Tests: Age/Mann Whitney test Sex/Chi square Married/single/Chi square Employed?/Chi square Presented trauma Chi squee Mental health/addiction/ Chi square Presence of Panic/Chi square Trauma event, how long ago/Mann Whitney test (14.) The high drop out from the treatment rate is typical of this clinic. 
Nobody wants to attend a sexual health clinic. However, we also had added an intention to treat the value of 6 images made up of the 50 completed cases, plus the 49 who attended at least two sessions and recorded at least one F value. Alan HassardPlymouth, UK10 June 2020"
}
]
},
{
"id": "60129",
"date": "02 Mar 2020",
"name": "Gaëtan Mertens",
"expertise": [
"Reviewer Expertise Fear conditioning",
"learning",
"memory"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript titled “Dose response and working memory limit in an eye movement desensitization and reprocessing prospective case series” by Hassard and colleagues, the authors investigate the number of distressing images treated in EMDR treatment. The authors predicted that this number should average out to approximately 7 images and indeed based on a sample of 50 patients, this approximate number was achieved and replicated in two sets of archival cases. My overall impression was that this manuscript addressed an interesting question and the results seem to confirm the a priori set prediction. However, I have some serious reservations regarding the validity of the results given potential participants’ and researcher bias and unclarity with regard to how an ‘imagine’ was defined and counted. I also found the rationale for this study not particularly clear and convincing. I expand more on these reservations below.\nIntroduction: For me, it was quite unclear to how this specific relates to the discussion regarding the comparison between EMDR and CBT and how this study can show that EMDR therapy is faster than CBT. To my knowledge, CBT is not (necessarily) specifically concerned with treating distressing images of patients. As such, this study is not very informative of how CBT and EMDR can be compared and whether or not EMDR is a more efficient (faster) treatment than CBT. 
I would suggest focusing the introduction more specifically on EMDR therapy only and how many sessions may be required there. Also, a clear argument is missing as to why the authors would expect that patients would, on average, suffer from 7 distressing images. My guess is that this is based on working memory theory, in which it has been proposed that the number of distinct units of information that people can keep online in their short term working memory centers around 7 (e.g., Baddeley, 1994)1. However, to my knowledge, this proposal is somewhat controversial. Furthermore, I’m not sure why the capacity of working memory would have any bearing on patients’ number of distressing images, which are most likely based on long term memory. Why would the capacity of WM limit the number of distressing images of patients that require treatment to 7? Finally, I’m missing a link to other research investigating emotional memories and images in mental disorders (e.g., Brewin, Gregory, Lipton, & Burgess, 2010; Hackmann, Ehlers, Speckens, & Clark, 2004; Hirsch & Holmes, 2007)2,3,4. In short, I suggest that the authors remove the second paragraph in which they discuss EMDR therapy in relation to CBT, and rather focus their introduction on why they expect specifically 7 distressing images and how this may inform us about the number of treatment sessions required (is 1 treatment session sufficient to treat 1 intrusive image?).\nMethods: For me, there seem to be two major problems with the methods relating to this article: First, patients were explicitly informed about the seven-images average (see P. 4) and, as far as I can see, the therapist was also aware of the hypothesis that on average 7 images should be expected. This is problematic, as it can easily lead to biases and desirable responding (Orne, 1962)5. This, to me, seems a major threat to the validity of the results, as it may just reflect bias on the part of the experimenter and demand compliance on the part of the patients. 
Is there any argument/reason that the authors have that their results do not reflect bias and experimental demand? In my view, this study should be replicated with blinded experimenters and patients. A second major threat to the validity of the results is that it is not clearly defined what a ‘distressing image’ is and how this was counted. Also, apparently, larger images could be split up into smaller event sequences and these would then be considered as separate sensory images (see P. 4). This lack of a clear definition of what an image is and the possibility to subdivide images provides an even greater amount of freedom for the authors to define distressing images and for the numbers to be influenced by biases. The authors should (have) define(d) more explicitly what they considered to be a distressing image. Furthermore, it should be clarified how this was counted/calculated and by whom. If it was done by only one author, this should be clearly indicated as a potential threat to the reliability of the count. If counting was done by several authors (which would be much better), inter-rater reliabilities should be calculated and reported.\nDiscussion/interpretation: The authors conclude in the Discussion that “the distress complex tends to contain seven, plus or minus two, distress images”. However, reading the results, the average number of images was 7.68 within the boundaries 7.20 and 8.20 (based on all cases). This seems more than the expected number (i.e., closer to 8). Furthermore, there is substantial variation between the patients in the number of distress images (1-23). I wonder whether these results support their hypothesis. It seems in fact that the actual number is slightly higher than 7. Furthermore, the authors mention in the Discussion that ”this attraction towards seven raises the possibility of involvement of the working memory”. However, as I mentioned previously, to me distress images are related to long term memory, rather than working memory. 
It seems unlikely that any of the patients has a working memory span of 23. Furthermore, 7 images and more tends towards the highest percentiles of WM capacity (in healthy participants/students). In contrast, there is some evidence to suggest that typically anxiety patients have reduced WM capacity (Hayes, Hirsch, & Mathews, 2008)6. Hence, it seems unlikely that the number of distress images indeed reflects or is related to WM capacity. A more direct test of this hypothesis would be to measure patients’ working memory capacity and correlate the number of reported distress images directly to WM span scores.\n\nTaken together, my worry regarding this work is that it may be critically influenced by biases on the part of the experimenters and patients. As far as I can judge, no precautions have been taken to protect the results against such biases and there are no clear arguments given why the results can be considered to be robust against these biases. Additionally, I have reservations about the embedding of the motivation for and results of this study in the wider literature. Nonetheless, the topic of study (number of distressing images and number of required treatment sessions) is definitely of interest for the literature on psychotherapy. I suggest additional clarifications and research (i.e., double blind, clear definitions, independent raters) to permit clearer conclusions.\n\nMinor comments:\nTable 5: The cumulative percentages do not seem correct. Particularly, 2 patients reported 3 or fewer distressing images (4%). However, in the cumulative % column, it says 2%.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
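The arithmetic behind the reviewer's Table 5 minor comment above can be sketched directly. This is a toy example with invented counts (not the study's actual data) showing how a cumulative % column should be derived, and why it can never fall below the plain percentage at the same value:

```python
# Hypothetical illustration of the Table 5 check; the counts below are
# invented, not data from the study.
counts = {1: 1, 2: 0, 3: 1, 4: 3}   # images reported -> number of patients
total = sum(counts.values())         # 5 patients in this toy example

running = 0
cum_pct = {}
for images, n_patients in sorted(counts.items()):
    running += n_patients
    # cumulative % at v = 100 * (patients reporting <= v images) / total
    cum_pct[images] = 100.0 * running / total

# Sanity check: the cumulative % is never below the per-value %.
for images, n_patients in counts.items():
    assert cum_pct[images] >= 100.0 * n_patients / total
```

By construction the cumulative column is non-decreasing and bounds the per-value percentages from above, which is exactly the property the reviewer found violated (4% of patients at 3 or fewer images, yet a cumulative entry of 2%).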
"responses": [
{
"c_id": "5602",
"date": "15 Jun 2020",
"name": "Alan Hassard",
"role": "Author Response",
"response": "Response to Dr Mertens\n\nThank you for reviewing our report. The critique made has validity. My defense, or at least plea for mitigation, is partly theoretical and partly about the limitations of research in a clinical situation.\n\n1) How should we start research? I suggest that the usual method in psychology is to start with a theory and test it. This would be a rationale or theory (a priori) or derived from previous research. This did not happen here. There was no previous theory concerning the influence of working memory or images. There was an observation made in clinical practice that required whatever test could be contrived within the limitations of that situation. If we can test and replicate the observation, perhaps we can find an explanation (a posteriori). This is a legitimate inquiry. One might observe that the Earth rotates around the Sun at an average distance of 150 million km and then develop an explanation.\n\n2) The prediction was not that each patient will report 7 distress images. It was that the distribution of images for 50 patients will show a mean, mode or median of seven.\n\n3) The observation was made in clinical practice by myself. It required testing and there was no other way of testing than collecting data as one went along. This required the originator of the hypothesis to also be the collector of the data. If this arouses concern about experimenter bias or cueing the patients, then there is no defense against this charge, other than necessity.\n\n4) There are 3 ways to reduce this problem. First, the briefing clearly explained to patients that seven was the average but they should report all distress images they needed. We think it reasonable that the patient would understand that they were not expected to produce exactly seven. 
It was made clear that all distress images should be reported, relating to any bad life events, panic, flash-forwards or any other unlabelled interoceptive bad core affect. We also argue that informed consent required this. One premise of this work is that patients who have suffered more than 7 bad life events, such as years of domestic abuse, still find their place in this distribution. Repeated episodes become semantic memories, which are also EMDR targets. To achieve informed consent, this information needs to be explained.\n\n5) The before and after test scores given in Tables 2, 3 and 4 show high-difference, low-probability changes. It is a reasonable inference that this could only happen if the whole “distress complex” of average 7 images was treated. If the patients were restricting to a report of 7, to please the therapist, this would not happen. Half the patients, with above seven images, would not complete treatment and would (probably) retain high scores.\n\n6) The inclusion of the two archival, audit, series was intended to support the prospective series. We could not replicate the result across therapists but could across time with one therapist. This is the weakest argument, so I am writing it last. One could argue that the same problems occurred in all 323 cases.\n\n7) A major problem raised is that the therapist knew the prediction and might bias the results to seven. This is valid, but what to do about it? The solution offered is a double-blind replication in which both patient and therapist are not informed about the seven. This is perhaps the ideal, but how is this achieved? Arguments about keeping the patient in the dark are given above. How does one find an experienced EMDR therapist to do this without any discussion of the reason? The EMDR method advocated here is different from the conventional EMDR method. That is part of the demonstration reported here. 
Even if the resources were available and the situation permitted, it would require the EMDR therapist(s) to spend one or two years with 50 patients without enquiring or finding out why this particular EMDR version was in play. This is not the same as the blindfolded assessment used in some trials. Perhaps it is possible in well-funded research institute trials, but how would one achieve that status? Perhaps by demonstrating it in an observational prospective case series. However, that demonstration would inform potential EMDR therapists of the seven.\n\n8) Another way could have been a comparison with other EMDR therapists. Other therapists were initially in the project but left due to misadventure or retirement.\n\n9) The introduction addresses the odd problem of unbalanced randomized controlled trials (RCT) for EMDR versus CBT. The question raised is how does that lead to this prospective series. If trials are to balance treatment time between the 2 methods, then how much treatment time is needed? This will be controlled by how much treatment is needed. If there is a limit, even if only expressed statistically, on the number of images, then that can define how much treatment is needed. Our report addresses this issue, since we could not do an RCT. An RCT that addresses this issue has recently appeared. It demonstrated that EMDR patients needed 21 hours, on average, to discover and treat 4.2 trauma memories, compared with 63 hours for 1.5 memories for prolonged exposure (Stanbury et al., 2020). Stanbury, M., Drummond, P.D., Laugharne, J., Kullack, C., & Lee, C.W. (2020). Comparative Efficiency of EMDR and Prolonged Exposure in Treating Posttraumatic Stress Disorder: a Randomized Trial. Journal of EMDR Practice & Research, 14(1), 2-12.\n\n10) This project was intended to make the best test of a clinical observation that could be made when we were obliged to stay in the clinical situation. There was no research grant and only minimal support from other sources. 
The project is essentially little more than an audit, which also tests an observation. The method and R program used were transferred to routine audits.\n\nAlan Hassard\nPlymouth, UK\n10 June 2020"
}
]
}
] | 1
|
https://f1000research.com/articles/7-1471
|
https://f1000research.com/articles/7-1465/v1
|
14 Sep 18
|
{
"type": "Research Article",
"title": "Effect of various organic acid supplementation diets on Clarias gariepinus BURCHELL, 1822: Evaluation of growth, survival and feed utilization",
"authors": [
"Lia Asriqah",
"Rudy Agung Nugroho",
"Retno Aryani"
],
"abstract": "Background: The purpose of the current study was to determine the growth status, survival and feed utilization of catfish (Clarias gariepinus BURCHELL, 1822) fed various organic acid supplementations. Methods: In total, 1600 fish were randomly distributed into 20 tanks and fed different types of diet: A, control diet without organic acid supplementation; B, control diet supplemented with 0.05% formic, acetic, and propionic acid; C, control diet supplemented with 0.1% formic, acetic, and propionic acid; D, control diet supplemented with 0.05% butyric acid; E, control diet supplemented with 0.01% butyric acid. The control diet was a commercial diet, containing 35% crude protein, 8.58% crude fat, and 2.75% fibre. All fish were fed using a satiation method, three times per day for 56 days. At the end of the trial, growth, survival and feed utilization were determined. Water quality parameters during the trial were also measured once a week. Results: Fish fed diet type D had the significantly lowest (P<0.05) final weight (FW), weight gain (WG), and specific growth rate (SGR) of all diets. Similar FW, WG, and SGR were found for fish fed diets A-C and E. Meanwhile, the feed conversion ratio, feed efficiency, and survival rate of fish were not affected by any types of diet. The water quality parameters were not significantly different between tanks and weeks: dissolved oxygen 6.79-6.81 mg L-1, pH 7.11-7.19, water temperature 28.97-29.32°C, nitrite (NO2) content 0.48- 0.50 mg L-1, and ammonia (NH3) content 0.064-0.066 mg L-1. Conclusion: The supplementation of 0.05% butyric acid in the diet of C. gariepinus for 56 days reduced the growth performance of the fish. However, supplementation of an organic acid in the diet of C. gariepinus had no impact on feed utilization, survival, and water quality parameters.",
"keywords": [
"Organic acid",
"Growth",
"Survival Rate",
"Feed utilities",
"Clarias gariepinus"
],
"content": "Introduction\n\nOptimum and balanced nutrition, especially in fish culture, is a significant requirement and contributes up to 40–60% of the production cost of farmed fish1,2. The balance of a commercial diet that enhances optimum fish growth and health has attracted much research to develop a specific diet formulation1. It is also well known that the use of antibiotics or chemical substances as a growth promoter in the feed of fish may help to improve growth, survival, and feed utilization. However, wider concerns regarding the negative effects on the environment have led to a ban on the use of such chemical substances in the field of aquaculture3.\n\nPrevious research has reported the use of non-chemical substances, such as acidifiers, to increase growth performance in several fish species. Dietary supplementation of citric acid/formic acid increases the bioavailability of minerals, including phosphorus, magnesium, calcium and iron in rainbow trout (Oncorhynchus mykiss), sea bream (Pagrus major) and Indian carp (Labeo rohita)4,5. Some researchers also claimed that dietary acidifiers in the feed of fish reduce the pH in the stomach and foregut, which helps improve pepsin activity, enhancing protein metabolism and mineral intake of the intestines4,6. In addition, these short-chain organic acids are generally absorbed through the intestinal epithelia by passive diffusion, providing energy for renewing the intestinal epithelia and maintaining gut health7.\n\nBesides nutritional concerns in aquafeed, aquaculture activities commonly produce waste, such as feed remains and feces, which can be converted into ammonia and nitrite. Further, the levels of ammonia (NH3) and nitrite (NO2) increase rapidly in a closed culture system and can be harmful to fish8,9. Thus, water quality parameters are a major concern in the aquaculture system. 
Previous research revealed that the values of water quality parameters during fumaric acid feeding experiments on the African catfish (Clarias gariepinus) are relatively stable, providing a dissolved oxygen concentration of 7.23-7.86 mg L-1, water temperature of 25.13-25.27°C and pH of 7.23-7.4810.\n\nA strain of African catfish, Clarias gariepinus BURCHELL, 1822, is a popular species for the aquaculture industry in Asian countries. In Indonesia, the production of catfish has been the second largest after tilapia, growing from 144,755 MT in 2009 to 644,221 MT in 201311. Catfish has pseudo-lungs, long bodies and a high capacity to produce mucous as a form of adaptation to live in stagnant environments or drought conditions. It is omnivorous, feeding on various feeds, such as plant material, plankton, arthropods, molluscs, fish, reptiles, and amphibians12. Compared to other species, catfish is more resistant to diseases, handles stressors well and has a high growth performance13. To increase growth performance, aquaculturists and researchers have added various supplementations to the diet of catfish14–16. However, information regarding supplementation of organic acid (formic, acetic, propionic and butyric acid) in the diet of catfish is very rare. Thus, the aim of the current experiment was to evaluate the growth performance, feed utilization, and survival of catfish fed different types of diet containing organic acid.\n\n\nMethods\n\nThe research was performed at PT Suri Tani Pemuka Unit Research and Development, Ciranjang, West Java, Indonesia from March to May 2018. All C. garipienus were provided by PT Suri Tani Pemuka (Cisarua, Tegal Waru, HIAT Purwakarta Regency, West Java, Indonesia). The fish were kept in oxygenated polythene bags and transported by truck to PT Suri Tani Pemuka, Research and Development Farm, Ciranjang West Java, Indonesia. 
The fish were then acclimated and grown under farming conditions.\n\nThe study was carried out under the ethical protocols of the PT Suri Tani farm.\n\nFish were divided into five diet groups, kept in separate tanks, namely: A, control diet without organic acid supplementation; B, control diet supplemented with 0.05% formic, acetic, and propionic acid; C, control diet supplemented with 0.1% formic, acetic, and propionic acid; D, control diet supplemented with 0.05% butyric acid; E, control diet supplemented with 0.01% butyric acid. The control diet was a commercial diet (PT Suri Tani Pemuka, Purwakarta, West Java, Indonesia) containing 35% crude protein, 8.58% crude fat, and 2.75% fibre. Each treatment was replicated four times. All fish were maintained in plastic tanks (vol. 520 L) at a stocking density of 80 fish per tank and reared for 56 days.\n\nIn total, 1600 fish with an initial average weight of 8.78 g were randomly assigned to 20 plastic tanks (80 fish/tank) with a volume of 520 L. Each tank was filled with fresh water up to 500 L and the fish were stocked at a density of 80 fish tank-1. The fish were fed diets A–E three times per day (01:00, 05:00 and 09:00 GMT) using satiation methods for 56 days.\n\nBiomass (g) of the fish per tank was measured at the beginning and on the final day of the study. Weight gain was calculated using the equation:\n\nW = (Wt/Nt)-(W0/N0)\n\nwhere W is weight gain (g), Wt is the total weight of the fish at the end of the trial (g), W0 is the total weight of fish at the beginning of the trial (g), and Nt and N0 are the numbers of fish at the end and the beginning of the trial, respectively. 
The feed utilization and survival rates were determined following equations previously used by Muchlisin17 and Nugroho18:\n\nFeed efficiency (FE) = 1/FCR × 100%\n\nwhere FCR = feed conversion ratio:\n\nFCR = F / (Wt – W0)\n\nwhere F = total feed intake (g).\n\nSurvival rate (SR) = (Nt/N0) × 100%\n\nwhere Nt is the total number of fish at the end of the experiment and N0 is the total number of fish at the start of the experiment.\n\nWater quality parameters such as dissolved oxygen (DO) and temperature were measured using a digital water checker (YSI™ Model 550A Dissolved Oxygen Meter; Fisher Scientific, USA). pH was measured with a pH-meter (CyberScan pH 11; EuTech Instruments, Singapore). Meanwhile, NO2 and NH3 were detected using a Sera test kit (Sera GmbH D52518, Heinsberg, Germany). All the water quality parameters were measured once a week.\n\nResults are expressed as means ± standard error (SE) and data were analysed using SPSS version 22 (SPSS, Inc., USA). Survival data (%) were arcsine-transformed before statistical analysis. Growth and water quality data were subjected to analysis of variance (ANOVA), followed by Duncan's post hoc test to evaluate significant differences among the treatment groups. All significance tests were performed at P<0.05.\n\n\nResults\n\nBased on the statistical analysis, the present results showed that neither the control diet (A) nor organic acid supplementation in the diet of Clarias gariepinus (B–E) had a significant effect (P>0.05) on the feed conversion ratio (FCR), feed efficiency (FE), and survival rate (SR). The trial also showed that fish fed diet D had the significantly lowest (P<0.05) final weight, weight gain, and specific growth rate (SGR), but a similar final weight, weight gain, and SGR were found for fish fed diets A–C and E (Table 1).\n\nDifferent superscript letters (a, b) indicate significantly different means between diet groups at P < 0.05. 
A = control diet without organic acid supplementation; B = supplemented-control diets with 0.05% mix of formic, acetic, and propionic acid; C = supplemented-control diets with 0.1% mix of formic, acetic, and propionic acid; D = supplemented-control diets with 0.05% of butyric acid; E = supplemented-control diets with 0.1% of butyric acid; SGR = Specific growth rate, FCR = Feed conversion ratio, FE = Feed efficiency, SR = Survival rate. The control diet was a commercial diet, containing 35% crude protein, 8.58% crude fat, and 2.75% fibre.\n\nThe water quality parameters during the study showed that organic acid supplementation in the diet of Clarias gariepinus had no effect on the culture water quality. Dissolved oxygen ranged from 6.81 to 6.88 mg L-1, pH from 7.12 to 7.21, and water temperature from 27.07 to 29.50°C. Meanwhile, nitrite (NO2) content ranged from 0.045 to 0.057 mg L-1 and ammonia (NH3) content ranged from 0.372 to 0.50 mg L-1 (Table 2).\n\nMeans ± SE followed by the same superscript letter (a) are not significantly different at P < 0.05. Water quality parameters were measured once a week during the study. A = control diet without organic acid supplementation; B = supplemented-control diets with 0.05% mix of formic, acetic, and propionic acid; C = supplemented-control diets with 0.1% mix of formic, acetic, and propionic acid; D = supplemented-control diets with 0.05% of butyric acid; E = supplemented-control diets with 0.1% of butyric acid. The control diet was a commercial diet, containing 35% crude protein, 8.58% crude fat, and 2.75% fibre.\n\nThe data showing the growth parameters, such as initial and final weight, total weight gain, and total feed consumed by fish for every experimental group, and the water quality parameters can be seen in Dataset 1.\n\n\nDiscussion\n\nThe present results revealed that supplementation of organic acid in the diets had no significant effect (P>0.05) on the feed conversion ratio (FCR), feed efficiency (FE), and survival rate (SR). 
However, dietary supplementation of 0.05% butyric acid (D) in the diet of C. gariepinus resulted in a significantly lower (P<0.05) final weight, weight gain, and SGR compared with other diets. A similar final weight, weight gain, and SGR were also found for fish fed the control diet (A), and those fed with a 0.05% (B) and 0.1% (C) mix of formic, acetic, and propionic acid, and 0.1% (E) butyric acid. These findings are in line with a previous study in which dietary supplementation of 0.5 g kg-1 butyric acid in the diet of Clarias gariepinus produced no significant difference in weight gain, SGR, SR, and FCR. In contrast, weight gain, SGR, SR, FE and FCR of Oreochromis niloticus were significantly improved after being fed 0.5 g kg-1 butyric acid supplementation in the diet16.\n\nAccording to Da Silva et al.19, butyric acid in shrimp diets could act as a feed attractant for fish, which improves feed intake. Organic acids such as butyric acid improve the feed intake, gut and gastrointestinal tract activity of a red hybrid tilapia, Oreochromis sp., by the reduction in pH3,20. Another benefit of butyric acid for improving growth is attributed to its aroma, which acts as an attractant in the diet of shrimp21. However, a past study found that increasing levels of dietary organic acid such as fumaric acid (1.5–2 g kg-1) in the diet of C. gariepinus significantly reduced growth performance and feed utilization and improved survival rate after a challenge test with bacteria10. These findings might be correlated with pH balance in the gut of fish fed high dietary levels of organic acid. Furthermore, various concentrations of organic acids, such as propionic acid and acetic acid, have been shown to affect the feeding behaviour of Oreochromis niloticus. The supplementation of propionic acid at 10-4–10-6 M can stimulate feeding22. However, dietary propionic acid at 10-3 M may suppress feeding. 
In addition, past research has also found that dietary supplementation of acetic acid at 10-5 M had no effect on fish feeding. Lim et al.23 revealed that the benefits of organic acid supplementation in fish diets may vary among fish species and tend to be inconsistent, depending on the dietary ingredients, culture system, and water quality.\n\nIt is clear that feed remains in the water medium might change the water quality. The water quality parameters measured during the trial showed that the diets had no effect on the culture medium in the present study (Table 2). These findings are consistent with a past study by Omosowone and Adeparusi16, stating that water quality parameters such as temperature, dissolved oxygen and pH measured in similar experimental setups are all within the accepted range for the culture of fin fishes in tropical regions, as recommended by the National Research Council (USA)24.\n\n\nConclusion\n\nThe inclusion of organic acid in the diet of C. gariepinus had no impact on feed utilization, survival, and water quality parameters in the present study. However, the inclusion of 0.05% butyric acid in the diet of C. gariepinus for 56 days reduced the growth performance of the fish. Further research needs to be conducted to evaluate the effects of organic acid supplementation in the diet of fish on digestive enzyme activity, gut bacteria population, and fillet proximate analysis.\n\n\nData availability\n\nDataset 1: The initial and final weight, body weight gain, survival, and total feed consumed by fish for every experimental group (A–E) and water quality parameters. DOI, 10.5256/f1000research.15954.d21648625.",
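The growth and feed-utilization formulas defined in the Methods above (W, FCR, FE, SR) can be written out directly. This is a minimal sketch; the tank figures in the comments are hypothetical illustration values, not data from this study:

```python
# Growth and feed-utilization metrics as defined in the Methods section.
# The example values below are hypothetical, not the study's data.

def weight_gain(wt_total, nt, w0_total, n0):
    """Mean weight gain per fish: W = (Wt/Nt) - (W0/N0), in grams."""
    return wt_total / nt - w0_total / n0

def feed_conversion_ratio(feed_intake, wt_total, w0_total):
    """FCR = F / (Wt - W0): total feed intake per unit of biomass gained."""
    return feed_intake / (wt_total - w0_total)

def feed_efficiency(fcr):
    """FE = 1/FCR x 100%."""
    return 100.0 / fcr

def survival_rate(nt, n0):
    """SR = (Nt/N0) x 100%."""
    return 100.0 * nt / n0

# Hypothetical tank: 80 fish stocked at 8.78 g each (702.4 g biomass),
# 78 survivors weighing 2340 g in total, after 2000 g of feed.
w = weight_gain(2340.0, 78, 702.4, 80)              # 30.0 - 8.78 = 21.22 g
fcr = feed_conversion_ratio(2000.0, 2340.0, 702.4)  # feed per g of gain
fe = feed_efficiency(fcr)
sr = survival_rate(78, 80)                          # 97.5%
```

Note that W is a per-fish average (biomass divided by fish counts), while FCR is computed on tank totals; mixing the two bases is a common source of error when reproducing such tables.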
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors thank PT Suri Tani Pemuka Unit Research and Development, Cianjur, East Java, Indonesia for supporting this research with various facilities. All authors also thank the Faculty of Mathematics and Natural Sciences, Mulawarman University, Samarinda, East Kalimantan. Our appreciation goes to all of our students who helped the authors during the trial in the field.\n\n\nReferences\n\nCraig S, Helfrich LA, Kuhn D, et al.: Understanding fish nutrition, feeds, and feeding. 2017.\n\nFadri S, Muchlisin Z, Sugito S: Growth performance, survival rate and feed utilization of Nile tilapia, Oreochromis niloticus fed experimental diet contains jaloh leafs, Salix tetrasperma Roxb at different levels of EM-4 probiotic. Jurnal Ilmiah Mahasiswa Kelautan dan Perikanan Unsyiah. 2016; 1(2): 210–221.\n\nLuckstadt C: The use of acidifiers in fish nutrition. Perspectives in Agriculture, Veterinary Science, Nutrition and Natural Resources. 2008; 3(044): 1–8.\n\nJun-sheng L, Jian-lin L, Ting-ting W: Ontogeny of protease, amylase and lipase in the alimentary tract of hybrid juvenile tilapia (Oreochromis niloticus × Oreochromis aureus). Fish Physiol Biochem. 2006; 32(4): 295–303.\n\nVielma J, Lall S: Dietary formic acid enhances apparent digestibility of minerals in rainbow trout, Oncorhynchus mykiss (Walbaum). Aquac Nutr. 1997; 3(4): 265–268.\n\nLückstädt C: Effect of organic acid containing additives in worldwide aquaculture–sustainable production the non-antibiotic way. Acidifiers Anim Nutr. 2008; 71.\n\nAbu Elala NM, Ragaa NM: Eubiotic effect of a dietary acidifier (potassium diformate) on the health status of cultured Oreochromis niloticus. J Adv Res. 2015; 6(4): 621–629.\n\nSakala ME, Musuka CG: The effect of ammonia on growth and survival rate of tilapia rendalli in quail manured tanks. International Journal of Aquaculture. 2014; 4.\n\nSidik A: The effect of stocking density on nitrification rate in a closed recirculating culture system. Jurnal Akuakultur Indonesia. 2007; 1(2): 47–52.\n\nOmosowone O, Dada A, Adeparusi E: Effects of dietary supplementation of fumaric acid on growth performance of African catfish Clarias gariepinus and Aeromonas sobria challenge. Croatian Journal of Fisheries. 2015; 73(1): 13–19.\n\nFauji H, Budiardi T, Ekasari J: Growth performance and robustness of African Catfish Clarias gariepinus (Burchell) in biofloc‐based nursery production with different stocking densities. Aquac Res. 2018; 49(3): 1339–1346.\n\nVitule JR, Umbria S, Aranha J: Introduction of the African catfish Clarias gariepinus (BURCHELL, 1822) into Southern Brazil. Biol Invasions. 2006; 8(4): 677.\n\nPutra I, Rusliadi R, Fauzi M, et al.: Growth performance and feed utilization of African catfish Clarias gariepinus fed a commercial diet and reared in the biofloc system enhanced with probiotic [version 1; referees: 2 approved]. F1000Res. 2017; 6: 1545.\n\nChris UO, Singh N, Agarwal A: Nanoparticles as feed supplement on Growth behaviour of Cultured Catfish (Clarias gariepinus) fingerlings. Materials Today: Proceedings. 2018; 5(3): 9076–9081.\n\nEl-Husseiny OM, Hassan MI, El-Haroun ER, et al.: Utilization of poultry by-product meal supplemented with L-lysine as fish meal replacer in the diet of African catfish Clarias gariepinus (Burchell, 1822). Journal of Applied Aquaculture. 2018; 30(1): 63–75.\n\nOmosowone O, Dada A, Adeparusi E: Comparison of dietary butyric acid supplementation effect on growth performance and body composition of Clarias gariepinus and Oreochromis niloticus fingerlings. Iranian Journal of Fisheries Sciences. 2018; 17(2): 403–412.\n\nMuchlisin ZA, Arisa AA, Muhammadar AA, et al.: Growth performance and feed utilization of keureling (Tor tambra) fingerlings fed a formulated diet with different doses of vitamin E (alpha-tocopherol). Archives of Polish Fisheries. 2016; 24(1): 47–52.\n\nNugroho RA, Manurung H, Nur FM, et al.: Terminalia catappa L. extract improves survival, hematological profile and resistance to Aeromonas hydrophila in Betta sp. Archives of Polish Fisheries. 2017; 25(2): 103–115.\n\nda Silva BC, do Nascimento Vieira F, Mouriño JLP, et al.: Salts of organic acids selection by multiple characteristics for marine shrimp nutrition. Aquaculture. 2013; 384–387: 104–110.\n\nNg WK, Koh CB, Sudesh K, et al.: Effects of dietary organic acids on growth, nutrient digestibility and gut microflora of red hybrid tilapia, Oreochromis sp., and subsequent survival during a challenge test with Streptococcus agalactiae. Aquac Res. 2009; 40(13): 1490–1500.\n\nDa Silva BC, Vieira FdN, Mouriño JLP, et al.: Butyrate and propionate improve the growth performance of Litopenaeus vannamei. Aquac Res. 2016; 47(2): 612–623.\n\nXie S, Zhang L, Wang D: Effects of several organic acids on the feeding behavior of Tilapia nilotica. J Appl Ichthyol. 2003; 19(4): 255–257.\n\nLim C, Luckstadt C, Klesius P: Use of organic acids, salts in fish diets. Global Aquaculture Advocate. 2010; 13(5): 45–46.\n\nNRC: Nutrient requirements of warmwater fishes and shellfishes. Washington D.C.: Subcommittee on Warmwater Fish Nutrition, National Research Council, National Academies. 1983.\n\nAsriqah L, Nugroho RA, Aryani R: Dataset 1 in: Effect of various organic acid supplementation diets on Clarias gariepinus BURCHELL, 1822: Evaluation of growth, survival and feed utilization. F1000Res. 2018. http://www.doi.org/10.5256/f1000research.15954.d216486"
}
|
[
{
"id": "38341",
"date": "19 Sep 2018",
"name": "Gitartha Kaushik",
"expertise": [
"Reviewer Expertise Fishbiology",
"molecular Taxonomy and adaptive modifications"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper is scientifically sound in its current form and only minor, if any, improvements are suggested:\n\nKindly make these following corrections.\n1. Introduction: It is also well known that the use of antibiotics or chemical substances as a growth promoter in the feed of fish may help to improve growth, survival, and feed utilization ---- kindly cite the article stating this statement.\n\n2. Introduction: Previous research stated that the use of non-chemical substances such as acidifiers, to increase growth performance has been performed in several fish ---- kindly cite some previous reports.\n\n3. Introduction: Besides nutritional concern in aquafeed, generally aquaculture activities commonly produce waste, such as feed remains and feces, which can be converted into ammonia and nitrite ---- Who stated this? Kindly cite the article\n\n4. Introduction: Clarias gariepinus BURCHELL,1822 kindly write the nomenclature following FishBase. https://www.fishbase.de/summary/1934\n\n5. Methods: All C. garipienus were provided: Kindly check the spelling of the species.\n\nThe paper has well experimented. It can be accepted after these minor corrections.",
"responses": []
},
{
"id": "38357",
"date": "01 Oct 2018",
"name": "Ilham Ilham",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nManuscript titled \"Effect of various organic acid supplementation diets on Clarias gariepinus BURCHELL, 1822: Evaluation of growth, survival and feed utilization\" could be acceptable. Study design, data processing, writing, etc. are relatively good. However, some revisions need to be made to index the manuscript.\nAbstract Results: …were found for fish fed diets A-C and E - Kindly replace A-C with A,B,C …were not affected by any types of diet - write type instead of types Keywords: …. Survival rate, …Clarias gariepinus should be in italics\nIntroduction Paragraph 2: Previous research stated that the use of …… - whose research? citation? … which help improve pepsin activity, thus enhancing...\nParagraph 3: Since the word \"generally\" also means \"commonly\", please use one of those words … Previous research revealed that the values.... - whose research? citation?\nParagraph 4: A strain of African catfish, Clarias gariepinus BRUCHELL, 1822 … - The species should be written as C. gariepinus …. on various feeds, such as plan material,... - material should be in plural form, please add 's' The role of organic acids should also be mentioned in the \"Introduction\" section.\nMethods Site and time: Then, the fish had been adapted and …. - should be \"Then, the fish were adapted and …..\nExperimental design Is it true that the study was repeated four times? 
Or do you mean \"all treatments were designed in four replicates\"?\nFish culture and feeding trial: Again, kindly write diets A,B,C,D and E instead of diets A-E\nRegarding feed intake, how did you measure the feed intake? Was there any uneaten feed?\nMeasured parameters: Kindly replace \"W0\" with \"Wo\" and \"N0\" with \"No\" in the formula\nResults When you rewrite the scientific name of the species i.e. Clarias gariepinus, kindly use C. gariepinus. The same concern is also applicable in the Discussion section. Also check it in Table 1.\n\nIn Table 1 and Table 2, letter(s) i.e. a, b, c should be written only when there was significant difference(s) among treatments. Therefore, FCR, FE and SR should not be highlighted with common superscript (a) since they were all statistically insignificant.\nDiscussion Paragraph 2: … and tend to be inconsistent, depend on the …. - kindly write \"depending on\" instead of \"depend on\"",
"responses": []
},
{
"id": "38343",
"date": "09 Oct 2018",
"name": "Indra Suharman",
"expertise": [
"Reviewer Expertise Aquatic animal nutrition (Aquaculture)"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis papers reports on the growth performance of fish fed supplementation of organic acid in the diets of catfish. From this point of view the article is very interesting and deserves to be accepted for publication.\nThe Abstract is quite clear and provide a concise conclusion of the research work. However, it is better to state the initial average weight of fish in the “Abstract”.\n\nThe Introduction is strong supported by the literature cited and the objective of this study is clear stated.\n\nThe Methods is quite clear and the analysis correctly explained what has been obtained from the designed work and appropriately reflects the topic studied.\n\nThe Discussion is clear explained and the Conclusion has justified the basis of the results.\nOverall, the paper is well written and data well analyzed. I have found very interesting results as the effect of organic acid supplementation in the diets on growth performace and feed utilization of catfish.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1465
|
https://f1000research.com/articles/7-1460/v1
|
13 Sep 18
|
{
"type": "Case Report",
"title": "Case Report: A nine year follow-up for a pacemaker generator poly-tetra-fluoro-ethylene coating for allergic reactions to pacemaker compounds",
"authors": [
"Mehdi Slim",
"Elies Neffati",
"Afef Lagren",
"Kortas Chokri",
"Amine Tarmiz",
"Rym Gribba",
"Essia Boughzela",
"Elies Neffati",
"Afef Lagren",
"Kortas Chokri",
"Amine Tarmiz",
"Rym Gribba",
"Essia Boughzela"
],
"abstract": "Background: Allergic reaction to pacemaker compounds is a rare complication of cardiac pacing. Initial management is difficult because accurate diagnosis is often delayed. The tendency is to initially suspect a bacterial infection, rather than to quickly rule out an allergy to the pacemaker components. Management of this condition is difficult and not well established. Case presentation: A 75-year-old man underwent a dual chamber pacemaker implantation. The patient needed two generator re-implantations because of sterile skin necrosis. Pace maker allergic reaction was suspected despite non-conclusive skin patch testing. The patient underwent pacemaker system removal and re-implantation of poly-tetra-fluoro-ethylene sheet coated generator in a retropectoral position. Subsequently, there has been no externalization or recurrence in nine years of follow-up. Take-away lesson: Contact allergy to pacemakers is often unrecognized. Once infection has been excluded, allergy testing must be performed. The only valuable treatment is the removal of all the system components, followed by a replacement with hypoallergenic material. Polytetrafluoroethylene coated materials can be effective to prevent recurrence.",
"keywords": [
"Contact dermatitis - Cardiac Pacemaker - Patch Tests – Polytetrafluoroethylene."
],
"content": "Introduction\n\nPacemaker system hypersensitivity is a rare complication of cardiac pacing and diagnosis is usually difficult, and is often delayed1. In fact, bacterial infection is initially suspected routinely rather than allergy to the pacemaker components1. Management of this condition is difficult and not well established.\n\nHere, we report a patient who developed repeated sterile skin necrosis leading to generator externalization. Prick testing was of a doubtful interpretation. The patient recovered after implantation of a poly-tetra-fluoro-ethylene (PTFE)-coated pacemaker generator in a retropectoral position.\n\n\nCase report\n\nA 75-year-old man with symptomatic complete atrioventricular block received a dual chamber pacemaker (MEDTRONIC SIGMA DR) in November 2000. The patient required two generator re-implantations in March 2005 and August 2007 due to sterile skin necrosis with externalization of the pacemaker generator. A St Jude Medical IDENTITY DR pacemaker was used in the two replacements.\n\nIn May 2009, the patient presented with a third externalisation of the pacemaker generator, with localized swelling and redness in the implanted area. The patient had no fever, and inflammatory parameters during blood testing were negative; except the patient had a high eosinophil fraction (0.59×103/L; normal range: 0.0–0.5 ×103/L). Blood cultures, bacterial swabs and cultures of the material taken gave negative results. Swabs from pacemaker pocket showed that C-reactive protein and procalcitonin were negative. We performed a skin prick test using a standard battery (not including titanium). Little positivity was found to nickel and chrome batteries, but skin application of another pacemaker generator (St Jude Medical IDENTITY DR) produced a suggestive skin response.\n\nThe patient underwent removal of the old pacemaker system and re-implantation of a new pacemaker system. 
The generator (Saint Jude Medical Victory XLDR) was entirely coated with PTFE sheet and implanted in the left chest wall in a retropectoral position (Figure 1). The procedure was uneventful and the patient was discharged in good condition.\n\n(A) Entirely coated pacemaker generator with PTFE sheet; (B) coated generator connected to two leads implanted in retropectoral position.\n\nAfter 23 months of regular follow-up, chest CT showed that the generator had moved from a retropectoral to a subcutaneous position. No recurrence of externalization or cutaneous signs was observed up to nine years after implantation.\n\n\nDiscussion\n\nPacemaker component allergy is a rare but well established complication of cardiac pacing. It was first reported by Raque and Goldschmidt2 in 1970 in a patient who developed eczematous dermatitis overlying the pacemaker site within 3 weeks of implantation. Since then various reports have been published3,4. Several clinical presentations are observed, varying from local pain to systemic manifestations. Pacemaker-mediated dermatitis is thought to be a delayed hypersensitivity type 3 or 4 mediated reaction5. The time taken for sensitivity to develop varies from months to years5. In our case, the allergic process occurred several years after implantation. The allergen can be located in the “CAN” or other components of the pacing system. Titanium, nickel and epoxy resin are the most common allergens6.\n\nDiagnosis of pacemaker allergy is difficult and infection must be ruled out before any hypersensitivity investigation. Skin patch tests are helpful but not always contributive since their sensitivity is not very high. In our patient, skin tests showed a doubtful reaction to nickel and chrome. Titanium, the main component of the pacemaker generator, is not included in the standard battery but was very likely the allergen because the skin reaction was only observed above the generator. 
However, Déry and colleagues6 suggested that the titanium test is unreliable because it is performed using titanium tetrachloride, which must be highly diluted with water and is quickly hydrolyzed to insoluble titanium dioxide. A positive reaction to nickel can be found in up to 20% of the population7; therefore, nickel tests should be interpreted with caution. Furthermore, skin test reading can be extended to 72 hours to detect more positive reactions8. Yamauchi et al.9 reported a positive reaction to intracutaneous testing of serum incubated for 1 month with small pieces of titanium in a patient who had negative results on patch testing to titanium. In our patient, the clinical presentation was very suggestive of hypersensitivity, and titanium was very likely the allergen because the skin reaction was only observed above the generator (composed mainly of titanium).\n\nTreatments for contact dermatitis have been described in various case reports. Topical corticosteroids can reduce skin symptoms, but recurrence is common10. The only complete treatment is the removal of the allergen and the use of a hypoallergenic material. One option, as described by Syburra et al. and Andrews and Scheinman1,11, is the use of a gold-coated generator. The PTFE sheet coating technique was first reported in Japan8,12–14. This technique seems to be an effective method despite the theoretical risk of PTFE hypersensitivity. In our patient, the lack of recurrence confirms pacemaker component allergy and the effectiveness of PTFE coating in preventing it.\n\n\nConclusion\n\nAlthough pacemaker allergy is a rare condition, its recognition is important considering the widespread use of pacing and defibrillation systems. Allergic reactions can occur early or be delayed. It is recommended to rule out this diagnosis in cases of pacemaker pocket inflammation without signs of infection. A negative skin test should not exclude the diagnosis. 
In the present case, removal of all system components and the use of PTFE-coated materials were effective in preventing recurrence over a long follow-up.\n\n\nConsent\n\nWritten informed consent for publication of the clinical details and images was obtained from the patient.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nSyburra T, Schurr U, Rahn M, et al.: Gold-coated pacemaker implantation after allergic reactions to pacemaker compounds. Europace. 2010; 12(5): 749–50. PubMed Abstract | Publisher Full Text\n\nRaque C, Goldschmidt H: Dermatitis associated with an implanted cardiac pacemaker. Arch Dermatol. 1970; 102(6): 646–649. PubMed Abstract | Publisher Full Text\n\nPeters MS, Schroeter AL, van Hale HM, et al.: Pacemaker contact sensitivity. Contact Dermatitis. 1984; 11(4): 214–218. PubMed Abstract | Publisher Full Text\n\nHayes DL, Loesl K: Pacemaker component allergy: case report and review of the literature. J Interv Card Electrophysiol. 2002; 6(3): 277–278. PubMed Abstract | Publisher Full Text\n\nRaja Y, Desai PV, Glennon PE: Pacemaker-mediated dermatitis. Europace. Images in Electrophysiology. 2008; 10(11): 1354. PubMed Abstract | Publisher Full Text\n\nDéry JP, Gilbert M, O’Hara G, et al.: Pacemaker contact sensitivity: case report and review of the literature. Pacing Clin Electrophysiol. 2002; 25(5): 863–865. PubMed Abstract | Publisher Full Text\n\nTorres F, das Graças M, Melo M, et al.: Management of contact dermatitis due to nickel allergy: an update. Clin Cosmet Investig Dermatol. 2009; 2: 39–48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIshii K, Kodani E, Miyamoto S, et al.: Pacemaker contact dermatitis: The effective use of a polytetrafluoroethylene sheet. Pacing Clin Electrophysiol. 2006; 29(11): 1299–1302. PubMed Abstract | Publisher Full Text\n\nYamauchi R, Morita A, Tsuji T: Pacemaker dermatitis from titanium. Contact Dermatitis. 2000; 42(1): 52–3. PubMed Abstract\n\nSkoet R, Tollund C, Bloch-Thomsen PE: Epoxy contact dermatitis due to pacemaker compounds. Cardiology. 2003; 99(2): 112. 
PubMed Abstract | Publisher Full Text\n\nAndrews ID, Scheinman P: Systemic hypersensitivity reaction (without cutaneous manifestations) to an implantable cardioverter-defibrillator. Dermatitis. 2011; 22(3): 161–4. PubMed Abstract\n\nTamenishi A, Usui A, Oshima H, et al.: Entirely polytetrafluoroethylene coating for pacemaker system contact dermatitis. Interact Cardiovasc Thorac Surg. 2008; 7(2): 275–277. PubMed Abstract | Publisher Full Text\n\nVodiskar J, Schnöring H, Sachweh JS, et al.: Polytetrafluoroethylene-coated pacemaker leads as surgical management of contact allergy to silicone. Ann Thorac Surg. 2014; 97(1): 328–9. PubMed Abstract | Publisher Full Text\n\nIguchi N, Kasanuki H, Matsuda N, et al.: Contact sensitivity to polychloroparaxylene-coated cardiac pacemaker. Pacing Clin Electrophysiol. 1997; 20(2 Pt 1): 372–3. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "39381",
"date": "08 Mar 2019",
"name": "Antonella Tosti",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe clinical history and the follow-up are compatible with a contact dermatitis to the Pacemaker compounds but the authors have to explain well which test they did because they wrote “skin prick tests” but probably they did “patch tests”. Furthermore, even if the available Titanium preparations for detection of type IV hypersensitivity is currently inadequate, patch testing with titanium and other metals, would be recommended.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Partly",
"responses": []
},
{
"id": "65545",
"date": "26 Jun 2020",
"name": "Zefferino Palamà",
"expertise": [
"Reviewer Expertise EHra EP specialist (electrophysiologist)"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is very important to consider patient's body mass index in order to consider an eventual mechanical/compression skin erosion. In fact a retro pectoral implant in these cases could prevent skin erosion.\n\nThat consideration should part of discussion and limitations of the case.\n\nThe work is well written and very interesting for community.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1460
|
https://f1000research.com/articles/7-1448/v1
|
11 Sep 18
|
{
"type": "Research Article",
"title": "Exploring challenges of health system preparedness for communicable diseases in Arbaeen mass gathering: a qualitative study",
"authors": [
"Arezou Karampourian",
"Zohreh Ghomian",
"Davoud Khorasani-Zavareh",
"Arezou Karampourian",
"Zohreh Ghomian"
],
"abstract": "Background: Infectious diseases are common problems in mass gatherings, especially when there is a lack of health system preparedness. Since Iran is one of the most important countries on the walking path of Arbaeen and has a vital role in providing health services to pilgrims, the experiences of health challenges by participants is of key importance. The aim of this study is to explore stakeholders’ experiences on the health system's preparedness and challenges, and to provide suggestions for preventing infectious diseases during the Arbaeen mass gathering. Methods: A qualitative research method was used with a conventional content analysis approach. The number of participants was 17, including 13 executive managers and 4 health policymakers who entered the study among participants. Semi-structured interviews were used to generate the data. Interviews were analyzed by means of content analysis after face-to-face interviews. Results: Data analysis resulted in the extraction of four main themes and 11 sub-themes. Health infrastructure defects in Iraq has three sub-themes (health abandonment in Iraq, the weaknesses in health culture and problems related to the health system); poor control of the causative factors of infectious diseases has three sub-themes (the underlying factors of the prevalence of contagious diseases, health system response to communicable diseases and ignoring the risks of the Arbaeen ceremony); the low perception of risk in pilgrims has three sub-themes (lack of awareness in pilgrims, fatalism in pilgrims and unhygienic belief in pilgrims); and the ineffectiveness of health education has two sub-themes (training shortage in the targeted group and educational content problems) that shows participant’s experiences of the health system's challenges for coping with infectious diseases during the Arbaeen ceremony. 
Conclusion: Pilgrim-based training, planning and controlling other challenges may change these threats to opportunities and improve the health of participants of the mass gathering of Arbaeen in the region.",
"keywords": [
"Preparedness",
"Health system",
"Infectious diseases",
"Religious mass gatherings"
],
"content": "Introduction\n\nAccording to the definition of the World Health Organization (WHO), any structured or spontaneous event leading to a certain number of people gathering in a particular site, for a specific aim in a determined period, putting pressure on the response resources and social programs, is called a mass gathering. Mass gatherings are divided into different types based on their purpose. The expansion of interconnectivity between societies and increases in the number of national and international events in communities has led to an increase in the number of mass gatherings, which, despite the benefits like cultural exchange, have health challenges such as infectious disease transmission and, should therefore be considered by health planners1.\n\nOne of the health challenges of mass gatherings is the prevalence of infectious diseases and the outbreak of diseases, which, along with, Complicated health needs of participants increases the health burden on the host country. The public health system can be under severe pressure, even with advanced equipment and the proper resources for prevention and control of infectious diseases2–4. Various factors, such as the type and location of gathering, the number of participants and the lack of access to health facilities, can affect the incidence of infectious diseases in mass gatherings. Planners must therefore pay attention to these factors in preparation1,5–7. Since a mass gathering is a collection of many people together in one particular site, the possibility of infectious disease transmission due to the high population density always exists. 
Studies on mass gatherings such as Hajj, Ashura day in Karbala, and Kumbh Mela and Sabarimala in India show the prevalence of infectious diseases in these ceremonies8–13.\n\nThe dates of religious ceremonies like Hajj and Arbaeen are decided using the lunar calendar (the Islamic or Hijri calendar), which is shorter than the Gregorian calendar, meaning that these events occur about 10 days earlier each year and can coincide with different seasons and season-associated diseases. Accordingly, public health planners and policy makers are faced with changing goals, requiring health system preparedness14,15. One of the world’s largest religious gatherings is the Arbaeen ceremony, which happens on the 40th day after the anniversary of the martyrdom of Imam Hussein, the third Shiite Imam. In the Ashura event, pilgrims walk to Karbala, south of Baghdad. Based on the statistics of 2017, the number of Iranian pilgrims taking part in the Arbaeen ceremony was 2,320,00016.\n\nIran is a neighboring country of Iraq and shares a common land border with it. Pilgrims from other neighboring countries of Iran, such as Afghanistan and Pakistan, cross Iran to reach Iraq and the Arbaeen ceremony. On the basis of the Mutual Memorandum of Cooperation between Iran and Iraq, Iran is committed to providing health services to other pilgrims in addition to Iranian pilgrims4. Therefore, it is necessary to have a plan for preparedness in dealing with infectious diseases during the Arbaeen ceremony.\n\nIf a mass gathering is not carefully managed, it can lead to the spread of infectious diseases. In mass gatherings, infectious diseases are threats to global health security and even the political security of countries. Therefore, planning, communication and public health supervision are important in these religious ceremonies1,8. Mass gatherings differ from structured disasters in that, should an incident occur, many people will be affected17. 
Since the Arbaeen ceremony is held in Iraq with the presence of many pilgrims from many different countries and, like the Hajj pilgrimage, is based on the lunar calendar, there is the possibility of the occurrence and transmission of infectious diseases. It is therefore essential to be prepared to control and prevent these diseases. Dealing with infectious diseases in Arbaeen is considered a challenge for policy makers10. Given Arbaeen's social and cultural context, it seems essential to take a deeper look at this field. On the other hand, there is relatively little knowledge about the Arbaeen ceremony; therefore, a qualitative method for clarifying the concept and challenges of health system preparedness in the Arbaeen ceremony is necessary. The aim of this study was to explore challenges of health system preparedness for communicable diseases in the Arbaeen ceremony.\n\n\nMethods\n\nWe collected data from June 2017 to March 2018. Since the study attempted to explore preparedness challenges of health systems, a qualitative research method with the approach of conventional content analysis was used18. The health system's challenge in Arbaeen is multidimensional. Owing to the cultural differences between Iran and Iraq, the challenges faced by the health system during the Arbaeen pilgrimage should be investigated in both countries. Indeed, the cultural practices of the participants, especially those surrounding health, differ during the Arbaeen ceremony. Therefore, a qualitative research method, with the aim of describing phenomena, providing new knowledge, insight and a practical health guide, is the method used in this study18,19.\n\nThe study was conducted using in-depth interviews, based on stakeholders’ experiences in Iraq-Iran land terminals (Mehran, Shalamche, and Chazaba) and also in health care posts in Iraq. 
The interviews were conducted with health care providers and policy makers, as well as pilgrims in the Arbaeen ceremony.\n\nParticipants were chosen among executive managers and policymakers of the Ministry of Health and Medical Education, medical training and treatment and other related organizations, including the Red Crescent organization, Mobilizing the Medical Society, the Hajj and Pilgrimage Organization, medical universities in the border cities and the Social Security Organization. In total, 17 participants, consisting of 13 executive managers and 4 health policymakers, were selected in this study through purposeful sampling with the aim of exploring challenges of health system preparedness for communicable diseases in the Arbaeen ceremony. Practical experience in planning or participating in the Arbaeen ceremony, the ability to communicate and willingness to participate were the inclusion criteria. We recruited participants either by phone call or by approaching them in person.\n\n\"Maximum variation sampling\" was used to explore the experiences of the participants, who were chosen from the Ministry of Health and Medical Education, medical training and treatment and other related organizations, including the Red Crescent organization, Mobilizing the Medical Society, the Hajj and Pilgrimage Organization, medical universities in the border cities and the Social Security Organization, with different experiences of work, education and gender. Inclusion criteria included practical experience in planning or participating in the Arbaeen ceremony, the ability to communicate and willingness to participate in the research (Table 1).\n\nThe study was done through face-to-face interviews followed by telephone interviews for concept saturation. The data were collected using audio recorders with permission from the participants. AK conducted the interviews; AK and DKZ transcribed the data. 
AK, DKZ and ZGH coded the data. DKZ and ZGH established rigor. Initially, the first two interviews were conducted in a non-structured format, with the following 15 interviews being semi-structured. Open questions used to generate the data were developed by experienced and/or knowledgeable policy makers and health care providers. The individual’s experiences and beliefs were used without considering their specialty20. Interviews were continued until data and concept saturation were reached21. The interview duration was between 35 and 95 minutes, based on the tolerance, amount of information and desire of the participants. Interviews were performed individually and based on participants' willingness in terms of time and site.\n\nInterviews began with the following general questions, based on the participants' level and the main questions of the research: \"How was your organization preparedness plan to deal with infectious diseases in Arbaeen ceremony?\"; \"Please express your experiences of related challenges in infectious diseases in Arbaeen ceremony\"; \"What problems were in your preparedness plan?\"; “What problems did you face in the vaccination program of the health team and pilgrims in Arbaeen ceremony?”; and “What is your offer to pilgrims for a safe pilgrimage?\" Following this, exploratory questions were gradually used to clarify the concept and deepen the interview process: e.g. “Please explain further what you mean?” and “Why?” The interviews were transcribed verbatim and then typed up using Microsoft Word.\n\nThe data were analyzed through a conventional content analysis method18. First, the main researcher converted the interviews to written texts. The digital files were listened to several times and the texts were read repeatedly. Next, the meaning units were determined based on the aim and the question of the research. 
Meaning units were collections of words and sentences that were related to each other in content and were grouped together. Meaning units were raised to the level of abstraction and conceptualization and were coded with the research question in mind. The key points and subjects were extracted as open codes. These codes were then placed under broader headings based on their similarities and differences; in other words, the data were reduced in order to describe the phenomenon and gain a better understanding, and this abstraction process continued until concepts were extracted18. The data thus moved from meaning units to condensed meaning units, codes, sub-themes and finally themes.\n\nThe study was approved by the Ethics Committee of Shahid Beheshti University of Medical Sciences on 10/08/2017, No. IR.SBMU.RETECH.REC.1396.349. The interviews were conducted and recorded with participants' consent. Written or verbal consent to participate in the study was obtained from all participants; verbal consent alone was taken for telephone interviews, due to the distance between interviewer and interviewee. Participants were informed of anonymity, confidentiality and their right to withdraw, and these were respected throughout the study. The interview time was set according to participants' willingness.\n\nThe researchers used the trustworthiness criteria recommended by Guba and Lincoln to establish rigor22. All authors were engaged in the environment and field of research, and the principal investigator maintained suitable involvement with the participants for in-depth interviews. Credibility was established by the prolonged engagement of researchers with participants. Researcher triangulation was also used to verify the accuracy of the coding process. The research team also retained raw data, codes, and themes so that reliability could be checked.
At the same time, sampling with maximum variation supported credibility and confirmability through triangulation. A detailed description of the method was provided to establish transferability. Engaging participants in the research increases the interaction between researchers and participants and thereby the credibility of the findings22. The research supervisor monitored the data collection and data analysis process. It should be noted that the research team participated in the Arbaeen ceremony as pilgrims and took field notes. Member- and peer-checking were also used to ensure credibility: several interviews and the related findings were sent to external expert reviewers and participants (policymakers of the Ministry of Health and Medical Education, medical training and treatment and the Red Crescent organization), who were asked to assess the degree of relevance between the findings and the raw data. Moreover, transferability was established by sampling with maximum variation from various centers, including the Ministry of Health and Medical Education and peer organizations including the Red Crescent organization, Mobilizing the Medical Society, the Hajj and Pilgrimage Organization, medical universities in the border cities and the Social Security Organization, with different experiences of work, education and gender. A completed SRQR checklist is available in Supplementary File 1.\n\n\nResults\n\nThe mean age of participants was 43 years and the mean length of work experience was 18 years (Table 1). Overall, 1125 original codes were extracted and, after integration using conventional content analysis, four main themes consisting of 11 sub-themes were identified. The theme of “health infrastructure defects in Iraq” had three sub-themes: “health abandonment in Iraq”, “the weakness of health culture” and “problems related to the health system”.
The theme of “poor control of factors effective in infectious diseases” had three sub-themes: “underlying factors in the prevalence of contagious diseases”, “health system response to communicable diseases” and “ignoring the risks of the Arbaeen ceremony”. The theme of “low perception of risk in pilgrims” had three sub-themes: “lack of awareness in pilgrims”, “fatalism in pilgrims” and “unhygienic beliefs in pilgrims”. The theme of “ineffectiveness of health education” had two sub-themes: “training shortage in the targeted group” and “educational content problems” (Table 2).\n\nAccording to the findings of this study, the central concept is pilgrim-based education: it appears to be the key issue in the prevention of communicable diseases in the Arbaeen ceremony. Educating pilgrims can directly help people to create or reform health infrastructure in Iraq. Education can also help to identify health risks and respond to them by identifying factors effective in combating infectious diseases. Training and guidance delivered by people who are trusted and accepted by pilgrims, such as missionaries and religious leaders, can have a positive impact on pilgrims' beliefs. The main point of training is to identify the targeted groups, at the level of both pilgrims and health directors, and to consider the training needs of each group when preparing training guidelines.\n\nHealth abandonment in Iraq. Most of the participants believed that the Iraqi health system has been abandoned: foodstuff distribution is not supervised by a dedicated organization, and there is no clearly identifiable body responsible for executing and monitoring health rules. The existence of a trustee or supervisor in the health system could prevent the delivery and distribution of unsafe food and reduce the prevalence of infectious diseases.
Unhealthy foodstuff preparation, production and distribution, and the lack of a food evaluation and supervision system, can lead to gastroenteric disease. Based on the participants' experiences, the lack of a health system trustee reflects the weakness of the system.\n\nThe following quotation is an example of the above: \"Sometimes donations are prepared in an unhealthy manner and so lead to acute digestive problems… The health system is weak in Iraq because health rules are not enforced and there is no supervision for these centers…\" (Executive manager, male, 30–40 years, 11–20 years job experience).\n\nThe weakness of health culture. In the participants' view, unhealthy behaviors, such as neglect of individual and public health standards, together with the cultural differences between Iran and Iraq, reflect a weak health culture among pilgrims. Policy makers and executives should be familiar with the health culture in Iraq so that they can develop a program to prevent contagious diseases. In Iranian culture, not using spoons and forks is considered a lack of health belief and a neglect of health, whereas in Iraq, eating food with the hands is part of the food culture. Iran's health system can help train the Iraqi people in sanitary practices alongside Iranian pilgrims.\n\nThe following quotation is an example of the above: \"Some food providers don’t meet health … The culture of using spoons and forks for food serving is different in Iran and Iraq… one of the cultural works which we can do in Iraq is health education…\" (Executive manager, male, 41–50 years, 11–20 years job experience).\n\nProblems related to the health system. Most participants acknowledged that despite annual improvements in health care in Iraq, there are still shortages in this field because a health service system has not existed there for long.
Health system weakness, incomplete health service implementation and insufficient supervision of environmental health are indicative of weak health infrastructure; this has made it impossible to ensure environmental health, sanitation and waste disposal.\n\nThe following quotation is an example of the above: “Health background in Iraq was poor… There were no waste bins and waste sanitation there… failure to implement health services has led to failure in meeting health conditions … there was no health system supervision and waste collection system… of course every year it gets better than the previous year … \" (Executive manager, female, 41–50 years, 21–30 years job experience).\n\nThe underlying factors in the prevalence of contagious diseases. Most participants identified the underlying factors of infectious diseases as one of the challenges affecting the preparedness of the health system. Various factors, such as population density and diversity, inattention to the principles of personal and general health (for example through a lack of health facilities), weather conditions during the trip and changes in nutrition, can cause the spread of infectious diseases. Identifying these factors helps pilgrims and planners to prevent infectious diseases.\n\nThe following quotation is an example of the above: \"There were few toilets or no healthy facilities. Somehow, pilgrims would have to sleep in the desert or in a limited space with a lot of people… Congestion of pilgrims from different countries increases the risk of spreading infectious diseases…” (Executive manager, male, 51–60 years, over 30 years job experience).\n\nHealth system response to communicable diseases. In the view of participants, mass population movements from different countries and the resulting gathering of diverse populations can transfer and spread endemic diseases as well as emerging diseases such as plague and anthrax.
There is also the possibility of bioterrorism events occurring during the Arbaeen ceremony. The health system needs facilities such as equipped laboratories to diagnose and treat infectious diseases in a timely manner. The inaccessibility or lack of laboratory equipment for disease identification leads to failures in timely diagnosis, a lack of disease control and, finally, the incidence of epidemics. A syndromic surveillance system can be used to help diagnose infectious diseases. Completing the treatment course for infectious diseases is necessary to prevent epidemics, but drug shortages can prevent treatment completion and are thus one of the causes of epidemics. Vaccination also helps to prevent infectious diseases, but requiring pilgrims to be vaccinated is always up to the host country; since vaccination is not one of Iraq's priorities, the Ministry of Health and Medical Education can only advise pilgrims to be vaccinated.\n\nThe following quotation is an example of the above: \"There is the probability of spread of local and new-appeared diseases and even bioterrorism due to the gathering of pilgrims from different countries… some diseases can't be diagnosed due to lack of facilities but syndromic surveillance system can be used. …the large number of pilgrims and lack of medication have led to not having complete course of antibiotic treatment… The need for vaccination is one of the requirements of the host country\". (Policy maker, male, 41–50 years, 21–30 years job experience).\n\nIgnoring the risks of the Arbaeen ceremony. The Arbaeen pilgrimage has special features that distinguish it from other mass gatherings: the participation of people with a diverse range of socio-demographic statuses, cultures and nationalities. Like other trips, the Arbaeen pilgrimage has specific hazards, and these are sometimes neglected. Both the pilgrim population and the time of year of the ceremony are changeable.
Financial management of travel expenses at the ceremony is carried out by volunteers. Considering these features is essential for the readiness program. In the opinion of the majority of participants, one of the challenges of the health system is the lack of attention to the risks of the Arbaeen ceremony and the lack of planning based on these features.\n\nThe following quotation is an example of the above: \"Arbaeen is a new phenomenon that children and adults, men and women with different cultures and ethnicities take part in the ceremony … Even with the knowledge of the dangers of the route, people attend Arbaeen ceremony… Arbaeen is a spontaneous and popular event and doesn’t cost much … the population is moving and the time of the ceremony changes every year …\" (Executive manager, male, 30–40 years, 21–30 years job experience).\n\nLack of awareness in pilgrims. One of the challenges in the view of the participants, especially executives, is the pilgrims' low awareness of health hazards, reflected, for example, in not using personal hygiene products and non-compliance with health standards. Pilgrims do not recognize unhealthy and dangerous practices, such as consuming unsanitary food, and this low awareness and inadequate knowledge of the risks are causes of infectious diseases among them. Attention to personal and public health, such as hand washing, obtaining food from healthy food centers and avoiding overeating, is effective in preventing digestive diseases.\n\nThe following quotation is an example of the above: \"Some pilgrims do not wash their hands or do not use personal hygiene products …Or they do not get foods from healthy centers … overeating and then digestive problems are one of the pilgrims' problems … the other problem is not being familiar with Arabic language …\" (Executive manager, female, 30–40 years, less than 10 years job experience).\n\nFatalism in pilgrims.
Participants believed that one of the health challenges is belief in destiny and fatalism. Most pilgrims who take part in the Arbaeen walking ceremony, on the basis of this belief, have a relatively low understanding of the dangers and diseases; they begin their trip trusting only in Allah, without any plan for dealing with infectious diseases, such as vaccination, and continue on their way consuming food and drinks that are often unhealthy.\n\nThe following quotation is an example of the above: \"It's enough to decide to travel and you do not need to have a special plan … If you think openly, you won’t get sick, and nothing bad will happen…\" (Policy maker, male, 41–50 years, 21–30 years job experience).\n\nUnhygienic beliefs in pilgrims. Based on the participants' experiences, the low perception of danger in pilgrims is sometimes expressed as unhygienic beliefs, such as avoiding medication and disregarding hygiene recommendations. Conversely, taking medication during illness, avoiding self-treatment and following hygiene recommendations help prevent infectious diseases.\n\nThe following quotation is an example of the above: \"Sometimes we see pilgrims using traditional treatments instead of antibiotics consumption… they don’t pay attention to hygiene recommendations such as masking and not using suspicious food…\" (Executive manager, male, 30–40 years, 11–20 years job experience).\n\nTraining shortage in the targeted group. In the stakeholders' view, one of the present challenges is the quantitative and qualitative shortage of training among stakeholders. A personal and public health training plan should cover all stakeholders in the Arbaeen ceremony, including executives, policymakers and volunteers, and should be tailored to each group. Furthermore, the timing of training is also important: it should take place before the days of the Arbaeen ceremony in order to have a greater effect on individuals' knowledge.
Indeed, training courses should be held separately for each group (pilgrims, executive managers and policymakers) and at a suitable time before Arbaeen.\n\nThe following quotation is an example of the above: “Training should not exactly be in the days of Arbaeen. Personal and public health training must be held several months before Arbaeen… the training is not only for pilgrims, but also anyone involved in Arbaeen ceremony. Everyone should be trained from pilgrims to policymakers…\" (Policy maker, male, 41–50 years, 21–30 years job experience).\n\nEducational content problems. Another problem was the provision of educational content. Most of the participants believed that training should fit pilgrims' needs and respond to their problems. Pilgrims should be divided into groups based on their level of education, their problems and illnesses, and individual and general education should be planned accordingly. Participants also believed that training in the areas of personal, general and nutritional health should be provided more comprehensively.\n\nThe following quotation is an example of the above: \"We should divide pilgrims to diverse groups and send targeted training messages to each group, not the same training for everyone…the benefactors should be trained…pilgrims should have more training…\" (Policy maker, male, 41–50 years, 21–30 years job experience).\n\n\nDiscussion\n\nThe aim of this study, the first qualitative study on this topic in Iran, was to explore the challenges of health system preparedness for infectious diseases in the Arbaeen ceremony. The most important findings of the study are the ineffectiveness of health training, the low perception of risk in pilgrims, poor control of the causative factors of infectious diseases, and deficient and defective health infrastructure in Iraq.
Based on the views of the majority of participants, pilgrim-based training is the most effective factor in health system readiness for dealing with infectious diseases in the Arbaeen ceremony.\n\nIneffectiveness of health training is one of the challenges of health system preparedness. Health training is one of the plans that should be considered to ensure Arbaeen ceremony preparedness. Educational planning must be done before the Arbaeen ceremony is held, with consideration of the training content and targeted groups. Training must be tailored to the needs of each targeted group (pilgrims, executives and volunteer treatment teams), and the training content for pilgrims should differ from that for executives and policy makers. Past studies of religious gatherings such as the Hajj in Saudi Arabia and Ashura Day in Iraq, as well as other mass gatherings such as the Tamworth Country Music Festival in Australia, indicate that a crowd of people from diverse nations and cultures is a source of infectious disease, and disease transmission is one of the most important public health challenges in these kinds of events; training strategies addressing hand washing, masking and vaccination are therefore important for preventing disease and improving health8,14,16,23–33. Since religious ceremonies are rooted in people's beliefs and enjoy great popularity, people-centered education has great potential to reduce the gap between knowledge and practice in pilgrims, empowering individuals and enhancing their ability to deal with health threats. This goal is achieved through accurate and targeted needs assessment, development of relevant content and educational planning34.
In this study, as in other studies on mass gatherings, personal health training, covering, for example, the importance of hand washing, eating healthy food and masking, should be included in the pilgrims' training plan, while public health training, such as vaccination, environmental health, monitoring the preparation and distribution of donated food, controlling bioterrorism and establishing mobile toilets, should be part of the executives' and policymakers' preparedness plan. Given that the Arbaeen ceremony is a popular religious and ideological ceremony, training can be delivered by clergy in mosques before the ceremony.\n\nA low understanding of the risks of infectious diseases among pilgrims is another challenge mentioned by the majority of participants, especially policymakers. A lack of risk understanding refers to the inability to identify and respond to dangerous situations. High-level documents such as the Sendai Framework35 have identified understanding risk as the first priority for decreasing the incidence of disasters, noting that this requires broad, people-centered preventive approaches. The Hyogo Framework and the Sustainable Development Goals likewise emphasize the role of training in increasing risk understanding and decreasing the vulnerability of individuals to hazards35–37. In this study, one of the health issues was fatalism and a low understanding of risk. A study of the infection control beliefs and practices of Hajj pilgrims residing in Australia also showed that the majority of participants had a low understanding of the risk of respiratory infections and the need for an influenza vaccine during Hajj, and refused the vaccine, citing trust in Allah and belief in destiny when dealing with the risk of disease38. Another problem is the prevalence of self-treatment among pilgrims.
A study assessing the knowledge, attitudes and practices of Australian pilgrims regarding antibiotic use during Hajj showed that they did not have a proper understanding of these drugs and used them arbitrarily, meaning that more training on proper use is needed39. A lack of understanding of health instructions and disregard for public and personal health can endanger human health in any situation. It seems that pilgrims' fatalism with regard to disease increases their vulnerability when their understanding of danger is reduced. Islamic instructions state that a person is obliged to preserve his health and life in any location and position, even in holy lands, and to avoid risks40. Given that Arbaeen is a religious gathering, religious leaders hold an important position in the implementation of the ceremony and can affect the pilgrims' beliefs and understanding of risks during the pilgrimage. The influence of religious leaders on people's beliefs provides an opportunity to promote health. Religious scholars should therefore be trained first, and then transfer health instructions and methods of infectious disease control to the people in cultural and religious gatherings such as mosques.\n\nAnother problem in this field is the poor control of the factors effective in infectious diseases. The occurrence of infectious disease is commonly explained by the epidemiologic triangle, which has three components: pathogen, host and environment. The interaction of these agents causes infectious diseases, and considering these factors can help control them41. Considering the three factors in Arbaeen is very important. The ‘host’ factor includes different pilgrims with diverse cultures and nationalities.
The ‘environment’ factor includes the fact that the ceremony is held in Iraq, which, due to many years of internal and external conflicts, has paid little attention to its health infrastructure, as well as the general environmental factors effective in the occurrence of infectious diseases. Different studies1,8,42 have shown that factors such as crowd size, equipment, climate, the duration and location of the event, the type of ceremony, and the features and behavior of participants affect the occurrence of diseases in mass gatherings, and planners must consider them during preparations. To prevent the occurrence of contagious diseases and their consequences, comprehensive planning, rapid diagnosis and effective management are required1,8,42. One of the factors that affects the occurrence of disease is the time of year at which the Arbaeen ceremony is held. The Arbaeen ceremony is scheduled by the lunar calendar, so its needs and challenges differ depending on the season in which it falls: if the ceremony is held in cold seasons, respiratory diseases are more common, and if it is held in hot seasons, the majority of infectious diseases are of the digestive system. Additionally, population movement among different countries leads to the transfer of local diseases, so health considerations and cooperation between states are needed. Policymakers should consider the three components, pathogen, host and environment, before the beginning of the Arbaeen mass gathering. It is also necessary that the health system is aware of all possible scenarios and the methods for dealing with them. The preparedness plan at the local level includes assessing risk, resource capacity, equipment, the surveillance system and an expert team for providing services to pilgrims.\n\nDefective health infrastructure in Iraq was another challenge to the health system from the participants' perspective. The term ‘health infrastructure’ refers to health facilities and their related factors.
The infrastructure includes staff instructions, processes and the development of systematic approaches related to personnel resources and medical support plans43. Various studies indicate that readiness for structured mass gatherings depends on investing in health infrastructure appropriate to the size of the gathering, and that strengthening infrastructure and post-event coordination of mass gatherings must be continued. The inappropriate location of gatherings, weak facilities and a lack of infrastructure increase the vulnerability of communities44,45. The remoteness of health facilities and a lack of needed road infrastructure can make medical services and emergency assistance ineffective. Limitations in infrastructure and the medical care system increase the incidence of injuries44,45. Arbaeen is held in a country that has long been involved in internal and external wars; it therefore seems that, due to economic difficulties, Iraq does not have the capacity to support the health infrastructure required by pilgrims. Although, according to the participants, the health system in Iraq seems somewhat weak, the Arbaeen ceremony is particularly popular among Shia Muslims and is held annually, and some of the activities serving pilgrims, and their management during the ceremony, are conducted voluntarily. In recent years, numerous health facilities have been constructed at religious sites, as well as along the pilgrimage route, using pilgrims' donations. Management of pilgrims' donations can help to build and maintain health infrastructure in Iraq.
With this policy, Iranian pilgrims will benefit during the Arbaeen ceremony, and the level of health in the region will be improved.\n\n\nLimitations and strengths of the study\n\nThis is the first qualitative study of experiences of the Arbaeen ceremony, so it provides rich information in this regard; however, since the results were collected from semi-structured interviews, they are subjective. It is recommended that future studies develop a quantitative instrument for examining and measuring these challenges, so that the subjective concepts can be transformed into objective measures and analyzed. The present study can be used as a basis for this purpose.\n\n\nConclusion\n\nThe ineffectiveness of health training, low perception of risk in pilgrims, poor control of the factors effective in infectious diseases and deficient health infrastructure in Iraq are important challenges for the health system in dealing with contagious diseases in the Arbaeen ceremony from the stakeholders' perspective. Therefore, pilgrim-based educational planning, along with the control of the other challenges, represents an opportunity to improve the health of pilgrims taking part in the Arbaeen ceremony.\n\n\nData availability\n\nThe full data for this study are not provided because the transcripts of the interviews contain identifiable and sensitive information. Researchers can apply for access to limited de-identified transcripts of interviews from the first author, Arezou Karampourian (a.karampourian@sbmu.ac.ir), without strict conditions. Please note that transcripts are only available in Persian.
"appendix": "Grant information\n\nThe study was supported by the Shahid Beheshti University of Medical Sciences, Tehran, Iran.\n\n\nAcknowledgements\n\nThis study is part of a PhD thesis. The authors thank Shahid Beheshti University of Medical Sciences for approving the study, as well as all participants in this study who participated in the study, despite being busy.\n\n\nSupplementary material\n\nSupplementary File 1. Completed SRQR checklist.\n\nClick here to access the data.\n\n\nReferences\n\nWorld Health Organization: Public health for mass gatherings: Key considerations. 2015. Reference Source\n\nMemish ZA, McNabb SJ, Mahoney F, et al.: Establishment of public health security in Saudi Arabia for the 2009 Hajj in response to pandemic influenza A H1N1. Lancet. 2009; 374(9703): 1786–91. PubMed Abstract | Publisher Full Text\n\nMemish ZA, Stephens GM, Steffen R, et al.: Emergence of medicine for mass gatherings: lessons from the Hajj. Lancet Infect Dis. 2012; 12(1): 56–65. PubMed Abstract | Publisher Full Text\n\nAl-Tawfiq JA, Memish ZA: Mass gathering medicine: 2014 Hajj and Umra preparation as a leading example. Int J Infect Dis. 2014; 27: 26–31. PubMed Abstract | Publisher Full Text\n\nGautret P, Yong W, Soula G, et al.: Incidence of Hajj-associated febrile cough episodes among French pilgrims: a prospective cohort study on the influence of statin use and risk factors. Clin Microbiol Infect. 2009; 15(4): 335–40. PubMed Abstract | Publisher Full Text\n\nRazavi SM, Sabouri-Kashani A, Ziaee-Ardakani H, et al.: Trend of diseases among Iranian pilgrims during five consecutive years based on a Syndromic Surveillance System in Hajj. Med J Islam Repub Iran. 2013; 27(4): 179–85. PubMed Abstract | Free Full Text\n\nMaza I, Caballero F, Capitán J, et al.: Experimental results in multi-UAV coordination for disaster management and civil security applications. Journal of intelligent & robotic systems. 2011; 61(1–4): 563–85. 
Memish ZA, Zumla A, Alhakeem RF, et al.: Hajj: infectious disease surveillance and control. Lancet. 2014; 383(9934): 2073–82.\n\nJoseph JK, Babu N, Dev KA, et al.: Identification of potential health risks in mass gatherings: a study from Sabarimala pilgrimage, Kerala, India. Int J Disaster Risk Reduct. 2016; 17: 95–9.\n\nAl-Lami F, Al-Fatlawi A, Bloland P, et al.: Pattern of morbidity and mortality in Karbala hospitals during Ashura mass gathering at Karbala, Iraq, 2010. East Mediterr Health J. 2013; 19 Suppl 2: S13–8.\n\nAlqahtani AS, Wiley KE, Tashani M, et al.: Exploring barriers to and facilitators of preventive measures against infectious diseases among Australian Hajj pilgrims: cross-sectional studies before and after Hajj. Int J Infect Dis. 2016; 47: 53–9.\n\nGautret P, Steffen R: Communicable diseases as health risks at mass gatherings other than Hajj: what is the evidence? Int J Infect Dis. 2016; 47: 46–52.\n\nCariappa MP, Singh BP, Mahen A, et al.: Kumbh Mela 2013: Healthcare for the millions. Med J Armed Forces India. 2015; 71(3): 278–81.\n\nAhmed QA, Arabi YM, Memish ZA: Health risks at the Hajj. Lancet. 2006; 367(9515): 1008–15.\n\nAbubakar I, Gautret P, Brunette GW, et al.: Global perspectives for prevention of infectious diseases associated with mass gatherings. Lancet Infect Dis. 2012; 12(1): 66–74.\n\nAhmed QA, Barbeschi M, Memish ZA: The quest for public health security at Hajj: the WHO guidelines on communicable disease alert and response during mass gatherings. Travel Med Infect Dis. 2009; 7(4): 226–30.\n\nArbon P: The development of conceptual models for mass-gathering health.
Prehosp Disaster Med. 2004; 19(3): 208–12.\n\nElo S, Kyngäs H: The qualitative content analysis process. J Adv Nurs. 2008; 62(1): 107–15.\n\nGhafari S, Fallahi-Khoshknab M, Norouzi K, et al.: Experiences of hospitalization in patients with multiple sclerosis: A qualitative study. Iran J Nurs Midwifery Res. 2014; 19(3): 255–61.\n\nYaghmaei F, Mohammadi S, Majd HA: Developing and measuring psychometric properties of the “Quality of Life Questionnaire in Infertile Couples”. Int J Community Based Nurs Midwifery. 2013; 1(4): 238–45.\n\nPolit DF, Hungler BP, Beck CT: Essentials of Nursing Research: Methods, Appraisal, and Utilization. 2006.\n\nGuba EG, Lincoln YS: Fourth Generation Evaluation. Sage; 1989.\n\nRazumovskaya EM, Mishakin TS, Popov ML, et al.: Medical services during the XXVII World Summer Universiade 2013 in Kazan. Mediterr J Soc Sci. 2014; 5(18): 17.\n\nZumla A, Saeed AB, Alotaibi B, et al.: Tuberculosis and mass gatherings-opportunities for defining burden, transmission risk, and the optimal surveillance, prevention, and control measures at the annual Hajj pilgrimage. Int J Infect Dis. 2016; 47: 86–91.\n\nAlqahtani AS, Wiley KE, Willaby HW, et al.: Australian Hajj pilgrims’ knowledge, attitude and perception about Ebola, November 2014 to February 2015. Euro Surveill. 2015; 20(12): pii: 21072.\n\nTabatabaei A, Mortazavi SM, Shamspour N, et al.: Health Knowledge, Attitude and Practice Among Iranian Pilgrims. Education. 2015; 61(7): 4.5.\n\nTashani M, Alfelali M, Barasheed O, et al.: Australian Hajj pilgrims’ knowledge about MERS-CoV and other respiratory infections. Virol Sin. 2014; 29(5): 318–20.
PubMed Abstract | Publisher Full Text\n\nEmamian MH, Mohammadi GM: An Outbreak of Gastroenteritis Among Iranian Pilgrims of Hajj during 2011. Iran Red Crescent Med J. 2013; 15(4): 317–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPolkinghorne BG, Massey PD, Durrheim DN, et al.: Prevention and surveillance of public health risks during extended mass gatherings in rural areas: the experience of the Tamworth Country Music Festival, Australia. Public Health. 2013; 127(1): 32–8. PubMed Abstract | Publisher Full Text\n\nBenkouiten S, Brouqui P, Gautret P: Non-pharmaceutical interventions for the prevention of respiratory tract infections during Hajj pilgrimage. Travel Med Infect Dis. 2014; 12(5): 429–42. PubMed Abstract | Publisher Full Text\n\nKhan NA, Ishag AM, Ahmad MS, et al.: Pattern of medical diseases and determinants of prognosis of hospitalization during 2005 Muslim pilgrimage Hajj in a tertiary care hospital. A prospective cohort study. Saudi Med J. 2006; 27(9): 1373–80. PubMed Abstract\n\nBakhsh AR, Sindy AI, Baljoon MJ, et al.: Diseases pattern among patients attending Holy Mosque (Haram) Medical Centers during Hajj 1434 (2013). Saudi Med J. 2015; 36(8): 962–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMortazavi SM, Torkan A, Tabatabaei A, et al.: Diseases Led to Refer Iranian Pilgrims From Hajj in 2012. Iran Red Crescent Med J. 2015; 17(7): e12860. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSolaimani-kheiraldin M, Alizadeh-Mobasher T, Karamoz M, editors: public education. The 16th National Conference on Environmental Health in Iran. 2013; Iran.\n\nAssembly UG: The Sendai Framework for Disaster Risk Reduction 2015–2030. Resolution A/Res/69/283, 2015; 1516716. Reference Source\n\nGriggs D, Stafford-Smith M, Gaffney O, et al.: Policy: Sustainable development goals for people and planet. Nature. 2013; 495(7441): 305–7. 
PubMed Abstract | Publisher Full Text\n\nISDR U: Hyogo framework for action 2005-2015: building the resilience of nations and communities to disasters. Extract from the final report of the World Conference on Disaster Reduction (A/CONF 206/6); 2005. Reference Source\n\nAlqahtani AS, Sheikh M, Wiley K, et al.: Australian Hajj pilgrims' infection control beliefs and practices: Insight with implications for public health approaches. Travel Med Infect Dis. 2015; 13(4): 329–34. PubMed Abstract | Publisher Full Text\n\nAzeem M, Tashani M, Barasheed O, et al.: Knowledge, Attitude and Practice (KAP) Survey Concerning Antimicrobial Use among Australian Hajj Pilgrims. Infect Disord Drug Targets. 2014; 14(2): 125–32. PubMed Abstract | Publisher Full Text\n\nKarampourian A, Khorasani-Zavareh D, Ghomiyan Z: Fatalism at the Arbaeen Ceremony. Journal of Safety Promotion and Injury Prevention. 2017; 5(4): 184–1. Reference Source\n\nMalek Afzali H, Fotouhi A, Majdzadeh SR: Methodology of Applied Research in Medical Sciences. 2005.\n\nGhodsi H, Khorasani Zavareh D, Khodadadizadeh A, et al.: Letter to Editor: Mortality Trends of Pilgrims in Hajj: An Implication for Establishment of Surveillance System. Health in Emergencies and Disasters Qurterly. 2017; 2(4): 163–4. Publisher Full Text\n\nLund A, Gutman SJ, Turris SA: Mass gathering medicine: a practical means of enhancing disaster preparedness in Canada. CJEM. 2011; 13(4): 231–6. PubMed Abstract | Publisher Full Text\n\nThackway S, Churches T, Fizzell J, et al.: Should cities hosting mass gatherings invest in public health surveillance and planning? Reflections from a decade of mass gatherings in Sydney, Australia. BMC Public Health. 2009; 9(1): 324. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIlliyas FT, Mani SK, Pradeepkumar AP, et al.: Human stampedes during religious festivals: A comparative review of mass gathering emergencies in India. Int J Disaster Risk Reduct. 2013; 5: 10–8. Publisher Full Text"
}
|
[
{
"id": "38175",
"date": "03 Oct 2018",
"name": "Mahnaz Khatiban",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting and well-written piece of work. Infectious disease prevention strategies in mass gatherings, especially religious or cultural gathering, need to start with such the qualitative study. Having a predetermined program, implementation, evaluation, and upgrade of the strategies in preventing and responding to communicable diseases requires the proper recognition of the contextual factors and challenges, experiences and perception of healthcare providers and decision-makers of the key agencies involved. The authors carried out a qualitative study with conventional content analysis approach about the Arbaeen ceremony, a religious mass gathering in Iran and Iraq and found four main themes: 1- health infrastructure defects in Iraq, 2- poor control of the causative factors of infectious diseases, 3- the low perception of risk in pilgrims, and 4- ineffectiveness of health education. Emerging of All four themes has been verified by the participants’ comments which made results acceptable. The method of analysis is appropriately explained which can facilitate the reproducibility. The authors conclude that the main factor in preventing communicable disease is pilgrim-based training long before the ceremony. This conclusion is adequately supported by the results.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38176",
"date": "12 Nov 2018",
"name": "Abbas Ostadtaghizadeh",
"expertise": [
"Reviewer Expertise disaster and emergency health"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript is drafted very well. So, I would like to thank all authors for their attempts. Here under, There are some comments for improvement the quality of the manuscript which I hope to be helpful.\n\nIn the last paragraph of introduction, there are redundant phrases which can be edited. It is better the manuscript mentions some challenges there were in similar mass gathering in the world. In Methods, the authors mentioned that they have collected data from June 2017 to March 2018, while the study settings have been only health care posts in Iraq and Iran-Iraq borders. I think these posts were active only during Arbaeen period (Oct - Nov. 2017). The second paragraph of participant selection part is very similar to the first one (redundant). In addition, the correct name of organizations should be mentioned. for example, Iranian Red Crescent Society or Medical Society of Basij. Some English editions might be needed. In the discussion part, it is better to explain statistically how education and other interventions can influence on controlling of communicable diseases based on past researches. Mentioning the importance of interventions such as hand-washing and .... is not a new matter. It is better to discuss the results respectively as the results mentioned. So, the reader can insure that all results have considered in the discussion part. 
It would be better to explain what pilgrim-based education is and how it controls CD in the discussion and the conclusion as well.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "42004",
"date": "27 Feb 2019",
"name": "Harunor Rashid",
"expertise": [
"Reviewer Expertise Mass Gathering Medicine",
"Travel Medicine",
"Vaccine Preventable Diseases"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper is interesting and adds to the medical literature of Arbaeen Ceremony which has been poorly addressed from a public health research perspective. We hope this paper will serve as a key reference on health aspect of Arbaeen Ceremony in near future. A few minor issues should be addressed before the manuscript is published.\nThe reference list needs to be revised, we note some misplacement error. For example, in the third paragraph of the introduction (p 3), “Based on the statistics of 2017, the number of Iranian pilgrims taking part in the Arbaeen ceremony was 2,320,000”, the reference given for this is number 16. However, ref 16 (Ahmed et al.) was published 8 years ago in 2009. Also in the paragraph that follows, “on the basis of the Mutual Memorandum of Cooperation between the two countries of Iran and Iraq, Iran is committed to providing health services to other pilgrims in addition to Iranian pilgrims”, the reference cited is number 4, which is about Hajj and Umrah and cannot the correct reference. Repetition is another issue, some information are repeated unnecessarily. The authors may consider language editing. A useful reference that the authors may consider quoting is--Al-Lami F, et al. Pattern of morbidity and mortality in Karbala hospitals during Ashura mass gathering at Karbala, Iraq, 20101. Two other relevant references are—a) Alqahtani AS, et al. 
Australian Hajj pilgrims' infection control beliefs and practices: Insight with implications for public health approaches [2]; b) Rahman J, et al. Mass Gatherings and Public Health: Case Studies from the Hajj to Mecca [3].\nIn summary, this is an indexable manuscript requiring only minor revision.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1448
|
https://f1000research.com/articles/7-272/v1
|
05 Mar 18
|
{
"type": "Software Tool Article",
"title": "Taxa: An R package implementing data standards and methods for taxonomic data",
"authors": [
"Zachary S.L. Foster",
"Scott Chamberlain",
"Niklaus J. Grünwald",
"Zachary S.L. Foster",
"Scott Chamberlain"
],
"abstract": "The taxa R package provides a set of tools for defining and manipulating taxonomic data. The recent and widespread application of DNA sequencing to community composition studies is making large data sets with taxonomic information commonplace. However, compared to typical tabular data, this information is encoded in many different ways and the hierarchical nature of taxonomic classifications makes it difficult to work with. There are many R packages that use taxonomic data to varying degrees but there is currently no cross-package standard for how this information is encoded and manipulated. We developed the R package taxa to provide a robust and flexible solution to storing and manipulating taxonomic data in R and any application-specific information associated with it. Taxa provides parsers that can read common sources of taxonomic information (taxon IDs, sequence IDs, taxon names, and classifications) from nearly any format while preserving associated data. Once parsed, the taxonomic data and any associated data can be manipulated using a cohesive set of functions modeled after the popular R package dplyr. These functions take into account the hierarchical nature of taxa and can modify the taxonomy or associated data in such a way that both are kept in sync. Taxa is currently being used by the metacoder and taxize packages, which provide broadly useful functionality that we hope will speed adoption by users and developers.",
"keywords": [
"R language",
"taxonomy",
"taxa",
"R package",
"rOpenSci",
"metacoder",
"taxize"
],
"content": "Introduction\n\nThe R statistical computing language is rapidly becoming the leading tool for scientific data analysis in academic research programs (https://stackoverflow.blog/2017/10/10/impressive-growth-r/). R or its extensions were cited by almost 1% of all scientific articles in 2014 according to Elsevier’s Scopus database. For the agricultural and biological sciences, R was cited by over 3% of articles (Tippmann, 2015). One of the reasons for R’s popularity is how easy it is to develop and install extensions called R packages. There are now more than 10,000 packages on the Comprehensive R Archive Network (CRAN), over 1,300 packages on Bioconductor (Gentleman et al., 2004), and countless more on GitHub.\n\nThe recent increases in the affordability and effectiveness of high-throughput sequencing has led to a large number of ecological datasets of unprecedented size and complexity. The R community has responded with the creation of numerous packages for ecological data analysis and visualization, such as vegan (Oksanen et al., 2013), phyloseq (McMurdie & Holmes, 2013), taxize (Chamberlain & Szöcs, 2013), and metacoder (Foster et al., 2017). Taxonomic information is often associated with these large data sets and each package encodes this information differently. Since each package tends to have a unique focus, it is common to use multiple packages on the same data set, but converting between formats can be difficult. Considering how recently these large taxonomic data sets have become commonplace, it is likely that many more packages that use taxonomic information will be created.\n\nWithout a common data standard, using multiple packages with the same data set requires constant reformatting, which complicates analyses and increases the chance of errors. 
Package maintainers often add functions to convert between the formats of other popular packages, but this practice will become unsustainable as the number of packages dealing with taxonomic data increases. Even if a conversion function exists, doing the conversion can significantly increase the time needed to analyze very large data sets, like those generated by high-throughput sequencing. In addition, not all formats accommodate the same types of information, so conversion can force a loss of information.\n\nTaxa is a new R package that defines classes and functions for storing and manipulating taxonomic data. It is meant to provide a solid foundation on which to build an ecosystem of packages that will be able to interact seamlessly with minimal hassle for developers and users. The classes in taxa are designed to be as flexible as possible so they can be used in all cases involving taxonomic information. Complexity ranges from simple, low level classes used to store the names of taxa, ranks, and databases to high-level classes that can store multiple data sets associated with a taxonomy. In particular, the taxmap class is designed to hold any type of arbitrary, user-defined data associated with taxonomic information, making its applications limitless. In addition to the classes, there are associated functions for manipulating data based on the dplyr philosophy (Wickham & Francois, 2015). These functions provide an intuitive way of filtering and manipulating both taxonomic and user-defined data simultaneously.\n\n\nMethods\n\nThe basic classes. Taxa defines some basic taxonomic classes and functions to manipulate them (Figure 1). The goal is to use these as low-level building blocks that other R packages can use. The database class stores the name of a database and any associated information, such as a description, its URL, and a regular expression matching the format of valid taxon identifiers (IDs). 
The classes taxon_name, taxon_id, and taxon_rank store the names, IDs, and ranks of taxa and can include a database object indicating their source. All of the classes mentioned so far can be replaced with character vectors in the higher-level classes that use them. This is convenient for users who do not have or need database information. However, using these classes allows for greater flexibility and rigor as the package develops; new kinds of information can be added to these classes without affecting backwards compatibility and the database objects stored in the taxon_name, taxon_id, and taxon_rank classes can be used to verify the integrity of data, even if data from multiple databases are combined. These classes are used to create the taxon class, which is the main building block of the package. It stores the name, ID, and rank of a taxon using the taxon_name, taxon_id, and taxon_rank classes. The taxa class is simply a list of taxon objects with a custom print method.\n\nDiamond-tipped arrows indicate that objects of a lower class are used in a higher class. For example, a database object can be stored in the taxon_rank, taxon_name, or taxon_id objects. A standard arrow indicates that the lower class is inherited by the higher class. For example, the taxmap class inherits the taxonomy class. An asterisk indicates that an object (e.g. a database object) can be replaced by a simple character vector. A question mark indicates that the information is optional.\n\nThe hierarchy and taxonomy classes. The taxon class is used in the hierarchy and taxonomy classes, which store multiple taxa (Figure 1). The hierarchy class stores a taxonomic classification composed of nested taxa of different ranks (e.g. Animalia, Chordata, Mammalia, Primates, Hominidae, Homo, sapiens). The hierarchies class is simply a list of hierarchy objects with a custom print method. The taxonomy class stores multiple taxa in a tree structure representing a taxonomy. 
The individual taxa are stored as a list of taxon objects and the tree structure is stored as an edge list representing subtaxa-supertaxa relationships. The edge list is a two-column table of taxon IDs that are automatically generated for each taxon. Using automatically generated taxon IDs, as opposed to taxon names, allows for multiple taxa with identical names. For example, “Achlya” is the name of an oomycete genus as well as a moth genus. It is also preferable to using taxon IDs from particular databases, since users might combine data from multiple databases and the same ID might correspond to different taxa in different databases. For example, “180092” is the ID for Homo sapiens in the Integrated Taxonomic Information System, but is the ID for Acianthera teres (an orchid) in the NCBI taxonomy database. The tree structure of the taxonomy class uses less memory than the same information saved as a table of ranks by taxa, since the information for each taxon occurs in only one instance. It also does not require explicit rank information (e.g. “genus” or “family”).\n\nThe taxmap class. The taxmap class inherits the taxonomy class and is used to store any number of data sets associated with taxa in a taxonomy (Figure 1). A list called “data” stores any number of lists, tables, or vectors that are mapped to all or a subset of the taxa at any rank in the taxonomy. In the case of tables, the presence of a “taxon_id” column containing unique taxon IDs indicates which rows correspond to which taxa. Lists and vectors can be named by taxon IDs to indicate which taxa their elements correspond to. When a taxmap object is subset or otherwise manipulated, these IDs allow for the taxonomy and associated data to remain in sync. The taxmap also contains a list called “funcs” that stores functions that return information based on the content of the taxmap object. In most functions that operate on taxmap objects, the results of built-in functions (e.g. 
n_obs), user-defined functions, and the user-defined content of lists, vectors, or columns of tables can be referenced as if they are variables on their own, using non-standard evaluation (NSE). Any value returned by the all_names function can be used in this way. This greatly reduces the amount of typing needed and makes the code easier to read.\n\nManipulation functions. The hierarchy, hierarchies, and taxa classes have a relatively simple structure that is easily manipulated using standard indexing (i.e. using [, [[, or $), but the taxonomy and taxmap classes are hierarchical, making them much harder to modify for the average user. To make manipulating these classes easier, we have developed a set of functions based on the dplyr data manipulation philosophy. The dplyr framework provides a consistent, intuitive, and chain-able set of commands that is easier for new users to understand than equivalent base R commands, which have accumulated some idiosyncrasies over the last 40 years. For example, filter_taxa and filter_obs are analogs of the dplyr filter function used to subset tables.\n\nOne aspect that makes dplyr convenient is the use of NSE to allow users to refer to column names as if they are variables on their own. The taxa package builds on this idea. Since taxmap objects can store any number of user-defined tables, vectors, lists, and functions, the values accessible by NSE are more diverse. All columns from any table and the contents of lists/vectors are available. There are also built-in and user-defined functions whose results are available via NSE. Referring to the name of the function as if it were an independent variable will run the function and return its results. This is useful for data that is dependent on the characteristics of other data and allows for convenient use of the magrittr %>% piping operator. For example, the built-in n_subtaxa function returns the number of subtaxa for each taxon. 
If this were run once and the result was stored in a static column, it would have to be updated each time taxa are filtered. If there are multiple filtering steps piped together using %>%, a static “n_subtaxa” column would have to be recalculated after each filtering to keep it up to date. Using a function that is automatically called when needed eliminates this hassle. The user still has the option of using a static column if it is preferable to avoid redundant calculations with large data sets.\n\nUnlike dplyr’s filter function, filter_taxa works on a hierarchical structure and, optionally, on associated data simultaneously. By default, the hierarchical nature of the data is not considered; taxa that meet some criterion are preserved regardless of their place in the hierarchy. When the subtaxa option is TRUE, all of the subtaxa of taxa that pass the filter are also preserved and when supertaxa is TRUE, all of the supertaxa are likewise preserved. For example,\n\nfilter_taxa(obj, taxon_names == \"Fungi\", subtaxa = TRUE)\n\nwould remove any taxa that are not named “Fungi” or are not a subtaxon of a taxon named “Fungi”. By default, steps are taken to ensure that the hierarchy remains intact when taxa are removed and that user-defined data are remapped to remaining taxa. When the reassign_taxa option is TRUE (the default), the subtaxa of removed taxa are reassigned to any supertaxa that were not removed, keeping the tree intact. When the reassign_obs option is TRUE (the default), any user-defined data assigned to removed taxa are reassigned to the closest supertaxa that passed the filter. This makes it easy to remove levels of the taxonomy without losing associated information. Finally, if the drop_obs option is TRUE (the default), any user-defined data assigned to removed taxa are also removed, allowing for subsetting of user-defined data based on taxon characteristics.
The many combinations of these powerful options make filter_taxa a flexible tool and make it easier for new users to deal with the hierarchical nature of taxonomic data. The function sample_n_taxa is a wrapper for filter_taxa that randomly samples some number of taxa. All of the options of filter_taxa can also be used for sample_n_taxa, in addition to options that influence the relative probability of each taxon being sampled.\n\nOther dplyr analogs that help users manipulate their data include filter_obs, sample_n_obs, and mutate_obs. filter_obs is similar to running the dplyr function filter on a tabular, user-defined dataset, except that there are more values available to NSE and lists and vectors can also be subset. The drop_taxa option can be used to remove any taxa whose only observations have been removed during the filtering. The sample_n_obs function is a wrapper for filter_obs that randomly samples some number of observations. Like sample_n_taxa, there are options to weight the relative probability that each observation will be sampled. The mutate_obs function simply adds columns to tables of user-defined data.\n\nMapping functions. There are also a few functions that create mappings between different parts of the data contained in taxmap or taxonomy objects. These are heavily used internally in the functions described already, but are also useful for the user. The subtaxa and supertaxa functions return the taxon IDs (or other values) associated with all subtaxa or supertaxa of each taxon. They return one value per taxon. The recursive option controls how many ranks below or above each taxon are traversed. For example, subtaxa(obj, recursive = 3) will return information for all subtaxa and their immediate subtaxa for each taxon. The recursive option also accepts a simple TRUE/FALSE, with TRUE indicating all subtaxa of subtaxa, etc., and FALSE only returning immediate subtaxa, but not their descendants.
By default, subtaxa and supertaxa return taxon IDs, but the value option allows the user to choose what information to return for each taxon. For example, subtaxa(obj, value = \"taxon_names\") will return the names of taxa instead of their IDs. Any data available to NSE (i.e. in the result of all_names(obj)) can be returned in this way.\n\nThe functions roots, stems, branches, and leaves are a conceptual set of functions that return different subsets of a taxonomy. A “root” is any taxon that does not have a supertaxon. A “stem” is a root plus all subtaxa before the first split in the tree. A “branch” is any taxon that has only one subtaxon and one supertaxon. Stems and branches are useful to identify since they can be removed without losing information on the relative relationship among the remaining taxa. “Leaves” are taxa with no subtaxa. By default, these options return taxon IDs, but also have the value option like subtaxa and supertaxa, so they can return other information as well. For example, leaves(obj, value = \"taxon_names\") will return the names of taxa on the tips of the tree.\n\nIn the case of taxmap objects, the obs function returns information for observations associated with each taxon and its subtaxa. The observations could be rows in a table or elements in a list/vector that are named by taxon IDs. This is used to easily map between user-supplied information and taxa. For example, assuming a taxonomy with a single root, the value returned by obs for the root taxon will contain information for all observations, since they will all be assigned to a subtaxon of the root taxon. By default, row/element indices of observations will be returned, but the obs function also accepts the value option, so the contents of any column or other information associated with taxa can be returned as well.\n\nThe parsers. Taxonomic data appear in many different forms depending on the source of the data, making parsing a challenge for many users. 
There are two main sources of variation in how taxonomic data are typically stored: the type of information supplied (e.g. a taxon name vs. a taxon ID) and how it is encoded (e.g. in a table vs. as part of a string). In addition, there might be additional user-specific data associated with the taxa that need to be parsed. These data might be associated with each taxon in a classification (e.g the taxon ranks) or might be associated with each classification (e.g. a sequence ID). In many cases, both types are present. This complexity makes implementing a generic parser for all types of taxonomic data difficult, so parsers are typically only available for specific formats. The taxa package introduces a set of three parsing functions that can parse the vast majority of taxonomic data as well as any associated data and return a taxmap object.\n\nThe parse_tax_data function is used to parse taxonomic classifications stored as vectors in tables that have already been read into R. In the case of tables, the classification can be spread over multiple columns or in a single column with character separators (e.g. “Primates;Hominidae;Homo;sapiens”) or a combination of the two. Other columns are preserved in the output and the rows are mapped to the taxon IDs (e.g. the ID assigned to “sapiens” in the above example). For both tables and vectors, additional lists, vectors or tables can be included and are assigned taxon IDs based on some shared attribute with the source of the taxonomic data (e.g. a shared element ID or the same order). This makes it possible to parse many data sets at once and have them all mapped to the same taxonomy in the resultant taxmap object. Data associated with each taxon in each classification can also be parsed and included in the output using regular expressions with capture groups identifying the information to be stored and a key corresponding to the capture groups that identifies what each piece of information is. 
For example, Hominidae_f_2;Homo_g_3;sapiens_s_4 would use the sep \";\", the regular expression \"(.+)_(.+)_(.+)\", and the key c(my_taxon = \"taxon_name\", my_rank = \"info\", my_id = \"info\"). The values of the key indicate what the information is (a taxon name and two arbitrary pieces of information) and the names of the key (e.g. “my_rank”) determine the names of columns in the output.\n\nIf only a taxon name (e.g. “Primates”) or a taxon ID for a reference database (e.g. the ITIS taxon ID for Homo sapiens is “180092”) is available in a table or vector, then the classification information must be queried from online databases and the function lookup_tax_data is used. lookup_tax_data has all the same functionality as parse_tax_data in addition to being able to look up taxonomic classifications associated with taxon names, taxon IDs, and NCBI sequence IDs. If the data are embedded in a string (e.g. a FASTA header), then the function extract_tax_data is used instead. extract_tax_data has the functionality of parse_tax_data and lookup_tax_data, except that the information is extracted from raw strings using a regular expression and a corresponding key, the same way that data for each taxon in a classification is extracted by parse_tax_data. Together, these three parsing functions can handle every combination of data type and format (Figure 2).\n\nThe rows correspond to the common sources of taxonomic information: full taxonomic classifications encoded in text, taxon IDs from a database, taxon names (a single rank), and NCBI sequence IDs. The columns correspond to the different formats the information can be encoded in: as a simple vector, as columns in a table, and as a piece of a complex string (e.g. a FASTA header). In the case of tables and complex strings, other information associated with the taxa can be preserved in the parsed result, as is done in the “use cases” example below.
Any one cell in the table shows how to parse a given taxonomic information source in a given format using one of the three parsing functions: parse_tax_data, lookup_tax_data, extract_tax_data.\n\nTaxa is an R package hosted on CRAN, so only an R installation and internet connection are needed to install and use taxa. Once installed, most of the functionality of the package can be used without an internet connection. R can be installed on nearly any operating system, including most UNIX systems, macOS, and Windows. The minimum system requirements of R and the taxa package are easily met by most personal computers. The amount of resources needed will depend on the size of data being used and the complexity of analyses being conducted. The package can be installed by entering install.packages(\"taxa\") in an interactive R session. The development version can be installed from GitHub using the devtools package: devtools::install_github(\"ropensci/taxa\")\n\nFor users, the typical operation of the software will involve parsing some kind of input data into a taxmap object using a method demonstrated in Figure 2. Alternatively, a dependent package, such as metacoder, might provide a parser that wraps one of the taxa parsers or otherwise returns a taxmap object. Once the data are in a taxmap object, the majority of a user’s interaction with the taxa package would typically involve filtering and manipulating the data using functions described in Table 1 and applying application-specific functions in other packages, such as metacoder (Figure 3).\n\nRecords of plant species occurrences in Oregon are downloaded from the Global Biodiversity Information Facility (GBIF) using the rgbif package (Chamberlain, 2017). Then a taxa parser is used to parse the table of GBIF data into a taxmap object. A series of filters are then applied. First, all occurrences that are not from preserved specimens, as well as any taxa that have no occurrences from preserved specimens, are removed. 
Then, all taxa at the species level are removed, but their occurrences are reassigned to the genus level. All taxa without names are then removed. In the final two filters, only orders within Tracheophyta with more than 10 subtaxa are preserved. The metacoder package is then used to create a heat tree (i.e. taxonomic tree) with color and size used to display the number of occurrences associated with each taxon at each level of the hierarchy.\n\n\nUse cases\n\nTaxa is currently being used by metacoder and we are working on refactoring parts of taxize to work seamlessly with taxa as well. taxize and metacoder provide broadly useful functions for querying taxonomic databases and plotting taxonomic information, respectively. We hope that having these two packages adopt the taxa framework will encourage developers of new packages to do so as well. Regardless, the flexible parsers implemented in taxa (Figure 2) allow data from nearly any source to be used. The example analysis below uses data from the package rgbif (Chamberlain, 2017; Chamberlain & Boettiger, 2017), even though rgbif was not designed to work with taxa. This example shows a few of the benefits of using taxa. The function occ_data from the rgbif package returns a data.frame (i.e. table) of occurrence data for species from the Global Biodiversity Information Facility (GBIF) with one row per occurrence. 
The table has one column per taxonomic rank from kingdom to species.\n\n\n\nThe format returned by rgbif::occ_data is a variant of the format described in Figure 2, row 1, column 2, except that there is only one rank per column instead of all ranks being concatenated in the same column (the parser accepts any number of columns, each of which could contain multiple ranks delineated by a separator).\n\n\n\nIn the taxmap object returned by parse_tax_data, the original table returned by occ_data is stored as obj$data$tax_data, but an extra column with taxon IDs for each row is prepended.\n\n\n\nThe data are then passed through a series of filters piped together. The filter_obs command removes rows from the occurrence data table not corresponding to preserved specimens, as well as any corresponding taxa that no longer have occurrences due to this filtering. The multiple calls to filter_taxa that follow demonstrate some of the different parameterizations of this powerful function. By default, taxa that don’t pass the filter are simply removed and any occurrences assigned to them are reassigned to supertaxa that did pass the filter (e.g. occurrences for a deleted species would be assigned to the species’ genus). When the supertaxa option is set to TRUE, all the supertaxa of taxa that pass the filter will also be preserved. The subtaxa option works the same way. Finally, the filtered data are passed to a plotting function from the metacoder package that accepts the taxmap format. The plot is a taxonomic tree with color and size used to display the number of occurrences associated with each taxon (Figure 3).\n\n\n\nNote that columns in the original input table, like basisOfRecord, are used as if they were independent variables. This is implemented by NSE as a convenience to users, but they could also have been included by typing the full path to the variable (e.g. obj$data$tax_data$basisOfRecord or occ$data$basisOfRecord). 
This is similar to the use of taxon_ranks and taxon_names, which are actually functions included in the class (e.g. obj$taxon_ranks()). The benefit of using NSE is that these values are reevaluated each time their name is referenced. This means that the first time taxon_ranks is referenced in the example code it returns a different value than the second time it is referenced, because some taxa were filtered out in between. If obj$taxon_ranks() were used instead, it would fail on the second call because it would return information for taxa that had already been filtered out.\n\n\nConclusions\n\nWhile taxa is useful on its own, its full potential will be realized after being adopted by the community as a standard for interacting with taxonomic information in R. A robust standard for the commonplace problems of data parsing and manipulation will free developers to focus on specific novel functionality. The taxa package already serves as the foundation of another package called metacoder, which provides functions for plotting taxonomic information and parsing common file formats used in metagenomics research. Taxize, the primary package for querying taxonomic information from internet sources, is also being refactored to be compatible with taxa. We hope the broadly useful functionality of these two packages will jump-start adoption of taxa as the standard for taxonomic data manipulation in R.\n\n\nSoftware availability\n\nInstall in R as install.packages(\"taxa\")\n\nSoftware available from: https://cran.r-project.org/web/packages/taxa/index.html\n\nSource code available from: https://github.com/ropensci/taxa\n\nArchived source code available from: https://doi.org/10.5281/zenodo.1183667 (Foster et al., 2017)\n\nLicense: MIT",
"appendix": "Competing interests\n\n\n\nThe authors have declared that no competing interests exist. The use of trade, firm, or corporation names in this publication is for the information and convenience of the reader. Such use does not constitute an official endorsement or approval by the United States Department of Agriculture or the Agricultural Research Service of any product or service to the exclusion of others that may be suitable.\n\n\nGrant information\n\nThis work was supported in part by funds from USDA Agricultural Research Service Projects 2027-22000-039-00 and 2072-22000-039-15-S to NG.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nChamberlain S: rgbif: Interface to the Global ‘Biodiversity’ Information Facility ‘API’. R package version 0.9.8. 2017. Reference Source\n\nChamberlain SA, Boettiger C: R Python, and Ruby clients for GBIF species occurrence data. PeerJ Preprints. 2017; 5: e3304v1. Publisher Full Text\n\nChamberlain SA, Szöcs E: taxize: taxonomic search and retrieval in R [Version 1; Referees: 3 Approved]. F1000Res. 2013; 2: 191. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGentleman RC, Carey VJ, Bates DM, et al.: Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10): R80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFoster Z, Chamberlain S, Grunwald N: taxa v0.2.0 (Version 0.2.0). Zenodo. 2017. Publisher Full Text\n\nFoster ZS, Sharpton TJ, Grünwald NJ: Metacoder: An R package for visualization and manipulation of community taxonomic diversity data. PLoS Comput Biol. 2017; 13(2): e1005404. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcMurdie PJ, Holmes S: phyloseq: an R package for reproducible interactive analysis and graphics of microbiome census data. PLoS One. 2013; 8(4): e61217. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOksanen J, Blanchet FG, Kindt R, et al.: Package ‘Vegan’. Community Ecology Package, Version. 2013; 2(9).\n\nTippmann S: Programming Tools: Adventures with R. Nature. 2015; 517(7532): 109–10. PubMed Abstract | Publisher Full Text\n\nWickham H, Francois R: “Dplyr: A Grammar of Data Manipulation”. R Package Version 0.4. 2015; 1: 20."
}
|
[
{
"id": "31494",
"date": "26 Mar 2018",
"name": "Ethan P. White",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe software described in this paper provides useful tools for working with taxonomic data in R through a standard approach for storing and manipulating this hierarchically structured data. Taxonomic data is prevalent in many biological disciplines. As a result, this package fills an important niche and has the potential to become widely used by other packages dealing with biological data.\nThe software itself follows good development practices including modularization, documentation, version control, and automated testing. The package is available through CRAN – the main repository for R packages. Both the CRAN release and the development version of the package install smoothly. The use case examples given in the paper all run as expected on the development version, but they include functionality that is not present in the most recent release. This means that readers of the paper who have not installed the development version will encounter issues with the examples. We recommend a new minor version release so that the existing functionality is reflected in the latest release. Alternatively, in-development functionality could be removed from the examples in the paper.\nThe paper does a nice job of motivating the need for the package and the use case section nicely demonstrates some of the core functionality. However, there are improvements that could be made to help the paper communicate with a broader audience. 
Specifically we recommend changes to the Introduction and Methods sections.\nIn the Introduction we suggest either expanding the second paragraph or adding a new paragraph to describe the other kinds of datasets that this will be helpful for. There are many large and small ecological and evolutionary datasets beyond high-throughput sequencing that involve lots of taxonomic data (e.g., museum records, citizen science projects, compilations of literature data) and broadening the context will help more readers understand why this package might be useful to them. We also suggest adding an additional paragraph, following the third paragraph, that describes typical taxonomic data, including an example, and that mentions the specific challenges of this kind of hierarchical data. This will help readers less familiar with these issues understand the value of the software and help set up the technical details in the last paragraph of the Introduction. To make room for these additions we suggest removing the first paragraph, which currently states that R is “becoming the leading tool for scientific data analysis in academic research.” This specific interpretation isn’t justified by the associated citation and it is broadly understood that R is an important language so a paragraph explaining this isn’t really necessary.\nIn the Methods we suggest moving the parsing section to the beginning, and using the examples from that section throughout the descriptions of classes. This will help ground the descriptions of the classes and how they are related, which currently reads as somewhat abstract. The current second paragraph (“The hierarchy and taxonomy class”) would benefit from having the hierarchy class defined more and the differences between the hierarchy and taxonomy classes clarified. For example, it is stated later that the hierarchy class is simpler and the taxonomy class is more hierarchical; it would be helpful to include this information earlier. 
Moving the last two sentences of this paragraph to the beginning might address this issue. The taxon IDs information could be its own paragraph, starting with “Using automatically generated taxon IDs”. The examples in that section are really helpful. In the beginning of the third paragraph of the methods (“The taxmap class”), it would be helpful to emphasize that this class combines the rest of the original data (including an example of original data, e.g., mass) back with the taxon class. Finally, a figure of an example taxonomic hierarchy that illustrates the operation of the filtering, mapping, and roots/stems/branches/leaves functions would be useful.\nMinor suggestions\nDefine the following phrases to broaden communication:\n“character vectors”, first paragraph of methods\n“custom print method”, first paragraph of methods\n“non-standard evaluation”, third paragraph of methods\n“parsing”, paragraph 11 of methods\n\nCite Wickham & Francois (2015) for the dplyr philosophy in the fourth Methods paragraph\n\nConsider color coding the boxes in Fig. 1 to match the three classes paragraphs\n\nDefine R6 and S3 in the Fig. 1 legend\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3941",
"date": "11 Sep 2018",
"name": "Niklaus Grunwald",
"role": "Author Response",
"response": "Thank you very much for your detailed, constructive review, which much improved this manuscript. We addressed all your comments as follows: “The use case examples given in the paper all run as expected on the development version, but they include functionality that is not present in the most recent release. This means that readers of the paper who have not installed the development version will encounter issues with the examples. We recommend a new minor version release so that the existing functionality is reflected in the latest release.” Thanks for catching this! We have released a version on CRAN with this functionality. “In the Introduction we suggest either expanding the second paragraph or adding a new paragraph to describe the other kinds of datasets that this will be helpful for. There are many large and small ecological and evolutionary datasets beyond high-throughput sequencing that involve lots of taxonomic data (e.g., museum records, citizen science projects, compilations of literature data) and broadening the context will help more readers understand why this package might be useful to them” Good idea! We added a section on this. “We also suggest adding an additional paragraph, following the third paragraph, that describes typical taxonomic data, including an example, and that mentions the specific challenges of this kind of hierarchical data. This will help readers less familiar with these issues understand the value of the software and help set up the technical details in the last paragraph of the Introduction” Ok, good idea, we added some examples of diverse formats that we have used. 
“To make room for these additions we suggest removing the first paragraph, which currently states that R is “becoming the leading tool for scientific data analysis in academic research.” This specific interpretation isn’t justified by the associated citation and it is broadly understood that R is an important language so a paragraph explaining this isn’t really necessary.” Yes, we had a similar comment from another reviewer, so we removed part of this. “In the Methods we suggest moving the parsing section to the beginning, and using the examples from that section throughout the descriptions of classes. This will help ground the descriptions of the classes and how they are related, which currently reads as somewhat abstract.” The parsers only return `taxmap` objects so far, and `taxmap` is built upon the previous classes, so that is why we ordered it that way. However, we agree that it is not immediately clear what the importance of the first classes described is. “The current second paragraph (“The hierarchy and taxonomy class”) would benefit from having the hierarchy class defined more and the differences between the hierarchy and taxonomy classes clarified.” Agreed, we will clarify this. “In the beginning of the third paragraph of the methods (“The taxmap class”), it would be helpful to emphasize that this class combines the rest of the original data (including an example of original data, e.g., mass) back with the taxon class.” Good point. This is a key aspect of the class. “Finally, a figure of an example taxonomic hierarchy that illustrates the operation of the filtering, mapping, and roots/stems/branches/leaves functions would be useful.” We like this idea. This is something we have considered in the past. 
We will add this to one of our vignettes in the future: https://github.com/ropensci/taxa/issues/170 “Define the following phrases to broaden communication “character vectors”, first paragraph of methods “custom print method”, first paragraph of methods “non-standard evaluation”, third paragraph of methods “parsing”, paragraph 11 of methods” Agreed, we made those changes. “Cite Wickham & Francois (2015) for the dplyr philosophy in the fourth Methods paragraph” We cited it in the introduction. “Consider color coding the boxes in Fig. 1 to match the three classes paragraphs” We are not sure what you mean here. “Define R6 and S3 in the Fig. 1 legend” Agreed."
}
]
},
{
"id": "31493",
"date": "26 Mar 2018",
"name": "C. Titus Brown",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present the R package taxa, which provides a set of datatypes and functions for working with taxonomic data. The authors hope that they have contributed a strong base from which the taxonomic data ecosystem can build in R. The authors have also included a particularly useful set of parsers and dplyr-like functionality within their package. The packages metacoder and rgbif are included as being compatible with taxa, and the authors mention that the popular package taxize is being refactored for compatibility.\n\nMajor points\nThere is no discussion of the limitations of the software, or a specific discussion of incompatibility issues. If the authors have never encountered incompatibility issues, it would be helpful if they stated the other packages or formats with which they have not encountered issues. The introduction does not provide a concrete discussion of the challenges that the package taxa addresses.\n\nMinor points\nIn paragraph one, the authors note the ease with which one can develop an R package. I recommend adding \"relative\" somewhere in there. In paragraph two, it's not clear what is meant by \"each package encodes this information differently.\" In paragraph four, \"Complexity ranges from simple,\" \"simple\" is perhaps not the right word. In paragraph five, \"However, using these classes allows for greater flexibility and rigor as the package develops,\" it is not clear what is meant by \"the package.\" In paragraph six, \"(e.g. 
Animalia, Chordata, Mammalia, Primates, Hominidae, Homo, sapiens)\" and “Achlya” should be italicized. In paragraph eight, \"for the average user\" should be removed. The clause, \"that is easier for new users to understand than equivalent base R commands, which have accumulated some idiosyncrasies over the last 40 years\" should also be rephrased to celebrate dplyr without cutting down base R. In paragraph 10, \"The many combinations of these powerful options make filter_taxa a flexible tool and make it easier for new users to deal with the hierarchical nature of taxonomic data,\" \"make\" should be \"makes.\" In paragraph 11, the sentence \"Other dplyr analogs that help users manipulate their data include filter_obs, sample_n_obs, and mutate_obs, filter_obs is similar to running the dplyr function filter on a tabular, user-defined dataset, except that there are more values available to NSE and lists and vectors can also be subset,\" is confusing. In paragraph 15, sentence 1, \"for many users\" should be removed. In paragraph 16, “Primates;Hominidae;Homo;sapiens,” “sapiens,” and \"Primates\" should be italicized. In paragraph 17, \"Together, these three parsing functions can handle every combination of data type and format (Figure 2),\" every is a strong assertion.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": [
{
"c_id": "3940",
"date": "11 Sep 2018",
"name": "Niklaus Grunwald",
"role": "Author Response",
"response": "Thank you very much for your detailed, constructive review, which much improved this manuscript. We addressed all your comments as follows: “There is no discussion of the limitations of the software, or a specific discussion of incompatibility issues. If the authors have never encountered incompatibility issues, it would be helpful if they stated other packages or formats for which they have not encountered issues with.” We think adding some information on limitations of the software is a good idea, but we are not sure what you had in mind exactly. In regards to limitations on data set size and speed, we have not explored this systematically yet, although we plan to identify parts of the code to port to C++ to increase speed where needed. We are also not sure what you mean specifically by “incompatibility issues”. It is certainly true that few packages are designed to work seamlessly with `taxa` currently, but `taxa` was designed with this in mind and the parsers can be used to import data from other formats not designed for use with `taxa`, as was done in the use case. We added some examples of packages we have used with `taxa` to demonstrate compatibility and will mention if we find any that are not compatible in some way. In regards to formats, in our own work and when helping others, we have used the taxa parsers with numerous (maybe >20) different formats of taxonomic data and have never encountered a raw-text-based format that the current parsers cannot handle, but of course there might be some cases we have not encountered/considered. We like the idea of adding a list of formats that we have used taxa to read. We added this to the paper. 
“The introduction does not provide a concrete discussion of the challenges that the package taxa addresses.” We do mention the lack of a standard set of classes for packages to build on, which is the main challenge `taxa` is trying to address, and we added some more background on data parsing and manipulation, which are the other goals taxa` tries to address. “In paragraph one, the authors note the ease with which one can develop an R package. I recommend adding \"relative\" somewhere in there.” Good point! We did that. We remember it did not appear easy when we started. “In paragraph two, it's not clear what is meant by \"each package encodes this information differently.”” Ok, we added some examples. “In paragraph four, \"Complexity ranges from simple,\" \"simple\" is perhaps not the right word” Agreed. The low-level classes are quite simple currently, little more than containers for a few variables, but we removed the word “simple”, since it is not all that descriptive anyway. “In paragraph five, \"However, using these classes allows for greater flexibility and rigor as the package develops,\" it is not clear what is meant by \"the package.\"” We meant `taxa`. We reworded that, thanks! “In paragraph six, \"(e.g. Animalia, Chordata, Mammalia, Primates, Hominidae, Homo, sapiens)\" and “Achlya” should be italicized.” Agreed. Done. “I paragraph eight, \"for the average user\" should be removed. The clause, \"that is easier for new users to understand than equivalent base R commands, which have accumulated some idiosyncrasies over the last 40 years\" should also be rephrased to celebrate dplyr without cutting down base R.” Agreed, we made those changes. We did not mean to berate base R, but rather point out a lack of consistency relative to dplyr, but we can see how people could get that impression. 
“In paragraph 10, \"The many combinations of these powerful options make filter_taxa a flexible tool and make it easier for new users to deal with the hierarchical nature of taxonomic data,\" \"make\" should be \"makes.\"” “The many combinations” is plural, so we think it should be “make”. “In paragraph 11, the sentence \"Other dplyr analogs that help users manipulate their data include filter_obs, sample_n_obs, and mutate_obs, filter_obs is similar to running the dplyr function filter on a tabular, user-defined dataset, except that there are more values available to NSE and lists and vectors can also be subset,\" is confusing.” Thanks for catching that! That comma between “mutate_obs” and “filter_obs” was supposed to be a period, which we think makes it significantly less confusing. “In paragraph 15, sentence 1, \"for many users\" should be removed.” Ok, we did that. “In paragraph 16, “Primates;Hominidae;Homo;sapiens,” “sapiens,” and \"Primates\" should be italicized” Agreed. “In paragraph 17, \"Together, these three parsing functions can handle every combination of data type and format (Figure 2),\" every is a strong assertion.” We meant every combination of data type and format covered in the preceding paragraphs and the figure, and we clarified that further."
}
]
},
{
"id": "32711",
"date": "04 Apr 2018",
"name": "Damiano Oldoni",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe article is well written. Its structure is clear and the goals are well defined. Here below are some minor issues.\nGeneral Issue\nWhat is the reason taxa functionalities are not implemented in taxize, which already seems to be a general-purpose package to work with taxonomic information?\n\nIntroduction\nWe found citation numbers for R to be very low. Does “its extensions” include all packages? Overall, it would be better to drop the sentence with citation numbers and keep it to R + easy development of packages, thus going fast to paragraph 2, which is more important to provide context for the taxa package. “Database” is a very generic (technical) term. We would have expected “source”, or similar, for the source of taxonomic information, cf. http://dublincore.org/documents/dcmi-terms/#elements-source\n\nMethods\nImplementation\nFigure 1: we found ourselves drawing examples of the classes presented in figure 1. Would it maybe be useful to add those to the figure? If it is not possible due to graphic issues, maybe it could be useful to add them in the text, more or less as done in the vignette of the package. In “manipulation functions”: “Finally, if the drop_obs option is TRUE (the default), any user-defined data assigned to removed taxa are also removed, ...” With the reassign_taxa and reassign_obs discussed above, it wasn’t immediately clear how taxa can be removed. Maybe update to “... 
data assigned to removed taxa (those without supertaxa matching the criteria) are also removed ...”\n\nUse Cases\nUse cases: one use case presented. Update title to “Use case”. The presented use case is very informative, no need to add more use cases. The use case might have been stronger if taxonomic information from 2 sources was combined (e.g. GBIF and …)\n\nHere below are some minor issues about the package:\nConsider moving CONDUCT.md to the .github directory, as that directory is already used for CONTRIBUTING.md\nAdd a proper MIT License in the LICENSE file\nREADME.md is now a combination of https://github.com/ropensci/taxa/blob/master/vignettes/taxa-introduction.Rmd and https://github.com/ropensci/taxa/blob/master/README.Rmd. We would keep the README shorter (based on README.Rmd), with links to vignettes instead.\nConsider adding a pkgdown website, with references for functions + the two vignettes. The site can be built in the docs/ folder and hosted on GitHub pages, cf. https://inbo.github.io/wateRinfo/ Add the website to the repo description in the repo settings.\nConsider moving vignette figures to a “vignettes/figures” subdirectory for clarity\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3939",
"date": "11 Sep 2018",
"name": "Niklaus Grunwald",
"role": "Author Response",
"response": "Thank you very much for your detailed, constructive review, which much improved this manuscript. We addressed all your comments as follows: “What is the reason taxa functionalities are not implemented in taxize, which already seems a general purpose package to work with taxonomic information?” The main focus of taxize is to download taxonomic information from online databases. Since there are many sources of taxonomic data, taxize is already a large and complicated package. The R community provides restrictions on the size of packages. Also, not all applications where a class system for taxonomic data is needed require the ability to download taxonomic information. “We found citation numbers for R to be very low. Does “its extensions” include all packages? Overall, it would be better to drop the sentence with citation numbers and keep it to R + easy development of packages, thus going fast to paragraph 2 which is more important to provide context for the taxa package” It should include all packages. It does not seem low to us, considering that many people do not cite software, but we see your point. We removed it. ““Database” is a very generic (technical) term. Would have expected “source”, or similar for source of taxonomic information, cf. http://dublincore.org/documents/dcmi-terms/#elements-source” We agree that “Source” would be a good term, although we hesitate to change the code at this point. We would have to rename the `taxon_database` to `taxon_source` (it is shorter, which is nice) and many option names that have some reference to “database”. Changing our references to “database” to “source” in the paper is easy enough, but then the different words used for the same thing in the paper and the code might confuse some people. “Figure 1: we found ourselves drawing examples of the classes presented in figure 1. Would maybe be useful to add those to figure? 
If it is not possible for graphic issues, maybe could be useful to add them in the text, more or less as done in vignette of the package.” Good idea. We assume you are talking about the classes’ print methods. Some would fit well in Figure 1, but the `taxmap` and `taxonomy` print methods would be too big. We added some examples to the body of the paper. “In “manipulation functions” : “Finally, if the drop_obs option is TRUE (the default), any user-defined data assigned to removed taxa are also removed, ...” With the reassign_taxa and reassign_obs discussed above, it wasn’t immediately clear how taxa can be removed. Maybe update to “... data assigned to removed taxa (those without supertaxa matching the criteria) are also removed ...” Yes, we see why that is confusing; thanks for the suggestion. Observations are only removed if they cannot be reassigned to something else. That could happen when “reassign_obs” is FALSE or there are no taxa left they could be reassigned to (as you say). We added some clarification about this. “Use cases: one use case presented. Update title to “Use case”. The presented use case is very informative, no need to add more use cases” Good point, thanks! “Use case might have been stronger if taxonomic information from 2 sources was combined (e.g. GBIF and …)” We like that idea, but we can't think of a way to do it that would keep the example simple. We could look up the taxonomic hierarchy from NCBI or ITIS using the species binomial, but it would be an odd thing to do when the full classification is already available, so it might make the example confusing. “Consider moving CONDUCT.md to .github directory, as that directory is already used for CONTRIBUTING.md” Done, see: https://github.com/ropensci/taxa/issues/149 “Add proper MIT License in LICENSE file” To conform with CRAN guidelines we could not do this. 
See: https://github.com/ropensci/taxa/issues/150 “README.md is now a combination of https://github.com/ropensci/taxa/blob/master/vignettes/taxa-introduction.Rmd and https://github.com/ropensci/taxa/blob/master/README.Rmd. Would keep README shorter (based on README.Rmd), with links to vignettes instead.” and “Consider adding a pkgdown website, with references for functions + the two vignettes. Site can be build in docs/ folder and hosted on GitHub pages, cf. https://inbo.github.io/wateRinfo/” Good idea! We would like to add a website, but we will probably wait until we have one or two more vignettes done (the second is still being worked on). Any additional vignettes will be added with links. We would be OK with reducing the README once we have a website with documentation up that we can link to. https://github.com/ropensci/taxa/issues/151 “Consider moving vignette figures to “vignettes/figures” subdirectory for clarity” We will do so and have a pending issue on GitHub: https://github.com/ropensci/taxa/issues/152"
}
]
},
{
"id": "31496",
"date": "19 Apr 2018",
"name": "Holly M. Bik",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a framework named \"taxa\", designed to serve as a new standard package for interacting with taxonomy data in R. This package aims to address the ongoing difficulties in dealing with hierarchical taxonomy strings and numerical IDs in R, and I commend the authors on developing an exciting new framework that will simplify the manipulation and filtering of taxonomic data.\nOverall, I thought this was a well written manuscript that did a fairly comprehensive job at explaining the functions and classes within the taxa package. However, I have a few comments that would further clarify the package functionality and inputs, and help make this manuscript accessible to a more general audience of computational biologists and ecologists (e.g. with novice to intermediate knowledge of R).\nCurrently, this manuscript is geared towards a technical audience who are experts in R programming and package development. I would incorporate some more generalized explanations of the taxa package and its purpose (e.g. that assume a novice level of knowledge in R). For example, the use case using GBIF data frames assumes that readers are familiar with the field of biodiversity informatics and the format/information content of GBIF species occurrence data.\n\nWhat is the ideal input file for the taxa package? A basic tab-delimited taxonomy mapping file (e.g. with Accession IDs and taxonomic hierarchies only), a metabarcoding OTU table (e.g. 
JSON formatted or tab-delimited from QIIME where taxonomy strings are embedded along with study-specific data), or a full database with accessions and associated taxonomic information such as SILVA or NCBI? This package seems like it offers powerful tools for parsing and manipulating taxonomic information but it is not entirely clear what end users could (or should) be using as input files.\n\nIt would be useful to explain how the \"taxa\" package can be integrated and linked to the other ecological R packages. Specific explanations or use cases involving vegan or phyloseq would be useful here. The link to metacoder and taxize is much more clearly laid out, probably due to the fact that the authors also developed these packages.\n\nRelated to the previous point, how would you use taxa as a standalone package? The use case examples presented here make it seem like the \"taxa\" package is much more useful when used in conjunction with metacoder or taxize. However, given the diverse functionality it seems like there are many other (very common) use cases for taxa that are not clearly presented here.\n\nHow does \"taxa\" deal with (or allow manipulation / correction of) taxonomic hierarchies with non-homologous taxonomic levels? For example, a set of input hierarchies where level 4 represents \"Order\" level in Fungi but \"Subclass\" level in protists. This is a very common scenario for metabarcoding datasets - ideally you want to introduce gaps/placeholders for hierarchies that do not contain a certain level, so that users can automatically or manually standardize their taxonomic levels across all rows in a dataset (e.g. making Level 7 correspond to \"Family\" level across all taxa).\n\nDoes \"taxa\" (or related packages like taxize) contain any Taxonomic Name Resolution Service (TNRS) functionality? If not, is this planned for future releases?\n\nPage 5, paragraph 3: I found the description of the \"reassign_taxa\" option to be confusing. 
It was not clear to me what the purpose or result of this reassignment function would be. Clarifying the wording and adding a real world example would be useful here.\n\nTable 1: The description of \"arrange_taxa\" and \"arrange_obs\" is fairly vague. Do these functions rearrange data within a file or object (e.g. sorting or filtering)? If so, what are the options for ordering data (e.g. by abundance, alphabetical sorting, etc.)\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": [
{
"c_id": "3942",
"date": "11 Sep 2018",
"name": "Niklaus Grunwald",
"role": "Author Response",
"response": "Thank very much for your detailed, constructive review that much improved this manuscript. We addressed all your comments as follows: “Currently, this manuscript is geared towards a technical audience who are experts in R programming and package development. I would incorporate some more generalized explanations of the taxa package and its purpose (e.g. that assume a novice level of knowledge in R). For example, the use case using GBIF data frames assumes that readers are familiar with the field of biodiversity informatics and the format/information content of GBIF species occurrence data.” The paper was written mostly to package developers, but we agree that it would be valuable to make it more accessible. We added some more explanation of technical concepts, including the GBIF data. “What is the ideal input file for the taxa package? A basic tab-delimited taxonomy mapping file (e.g. with Accession IDs and taxonomic hierarchies only), a metabarcoding OTU table (e.g. JSON formatted or tab-delimited from QIIME where taxonomy strings are embedded along with study-specific data), or a full database with accessions and associated taxonomic information such as SILVA or NCBI? This package seems like it offers powerful tools for parsing and manipulating taxonomic information but it is not entirely clear what end users could (or should) be using as input files.” The taxa package has no ideal input format and provides flexibility for many formats, but there are some formats that are much easier to parse than others. Tabular data is usually easy to read, as are delimited taxonomy strings (e.g. taxa separated by ;). The taxa parsers do not read files directly. Instead they parse data already in R, so the different transformations/subsets of data from the same file could be parsed differently. For this reason, taxa does not read JSON/BIOM files from QIIME. 
Instead, it provides highly abstracted parsers to handle most formats and provides the foundation for more specialized (and easier to use) parsers, like those found in metacoder, which wrap the taxa parsers. There is a parser for JSON/BIOM QIIME files (parse_qiime_biom) in metacoder that uses the taxa parsers internally. The other examples you listed are easily handled by the taxa parsers after reading the file into R using something like read.table. “It would be useful to explain how the \"taxa\" package can be integrated and linked to the other ecological R packages. Specific explanations or use cases involving vegan or phyloseq would be useful here. The link to metacoder and taxize is much more clearly laid out, probably due to the fact that the authors also developed these packages.” `taxa` is primarily intended as a foundation for future packages rather than a way of interacting with existing packages, but it can use the data from other packages in some cases. For data structures from other packages that inherit lists, vectors, or data.frames, the taxa filtering functions should be able to manipulate them correctly as is. For other data structures with one or two dimensions with arbitrary row/column/item names, like vegan’s distance matrix or ape’s DNAbin objects, these can be included as is in the taxmap object’s `data` list, the same way standard data.frames/lists are, although the filtering functions do not currently support these and they will be ignored. We would like to add the ability to natively handle these classes in the future, so that, for example, a DNAbin could be included in a `taxmap` object and filtered using `filter_taxa` the same way a list can be now. This should not be too hard to implement, but we have not gotten to it yet. For complicated objects like phyloseq objects that hold many fields themselves, the best solution would be to convert them to `taxmap` objects, manipulate them with the taxa functions, and convert them back. 
The conversion should be lossless since the `taxmap` class should be able to store all the information in a phyloseq object. There is a function in metacoder called `parse_phyloseq` to convert a phyloseq object to a taxmap object and this uses the `taxa` parsers internally. “Related to the previous point, how would you use taxa as a standalone package? The use case examples presented here make it seem like the \"taxa\" package is much more useful when used in conjunction with metacoder or taxize. However, given the diverse functionality it seems like there are many other (very common) use cases for taxa that are not clearly presented here.” Good point! Yes, `taxa` can be quite useful on its own. A few examples we can think of include: looking up taxonomic classifications from sequence IDs, taxon IDs, and taxon names from a variety of databases (using taxize internally); subsetting data to a specific taxon; removing ranks or specific taxa from classification strings; combining taxonomic data from multiple sources into the same taxonomy; and getting lists of all the subtaxa/supertaxa for each taxon, or other data associated with all of the subtaxa/supertaxa. We added a few of these examples to the paper. “How does \"taxa\" deal with (or allow manipulation / correction of) taxonomic hierarchies with non-homologous taxonomic levels. For example, a set of input hierarchies where level 4 represents \"Order\" level in Fungi but \"Subclass\" level in protists. This is a very common scenario for metabarcoding datasets - ideally you want to introduce gaps/placeholders for hierarchies that do not contain a certain level, so that users can automatically or manually standardize their taxonomic levels all rows in a dataset (e.g. making Level 7 correspond to \"Family\" level across all taxa).” In the `taxonomy` and `taxmap` classes, the taxonomy is stored as a tree structure, not a table, so rank information is not needed, although it is supported when present. 
The absence of a rank has no placeholder; it simply does not exist in the tree. If the user wanted to subset the tree to only taxa of a specific set of ranks, they could do something like `filter_taxa(obj, taxon_ranks %in% c(\"family\", \"genus\", \"species\"))` and the tree would remain intact, although there would be missing levels in the tree if some tips did not have a “family” supertaxon, for example. There is not currently a way to enforce that each rank exists at a fixed depth/level from the root of the tree, but we could add a function to add placeholder taxa to force that to be the case (we added an issue on GitHub at: https://github.com/ropensci/taxa/issues/169). “Does \"taxa\" (or related packages like taxize) contain any Taxonomic Name Resolution Service (TNRS) functionality? If not, is this planned for future releases?” Yes, the parsing functions can optionally preprocess taxon names using the “Global Names Resolver” service via `taxize::gnr_resolve`. You just set the `type` option in `lookup_tax_data` or `extract_tax_data` to `\"fuzzy_name\"` instead of `\"taxon_name\"` to make this happen. This was a recent addition so it was not in the paper, but we have added it now. “Page 5, paragraph 3: I found the description of the \"reassign_taxa\" option to be confusing. It was not clear to me what the purpose or result of this reassignment function would be. Clarifying the wording and adding a real world example would be useful here.” Ok, we added some more explanation. The basic idea is that if you remove a taxon in the middle of the tree (say a family), it will assign any genera below that family to the order the family was in if reassign_taxa is set to TRUE (the default). “Table 1: The description of \"arrange_taxa\" and \"arrange_obs\" is fairly vague. Do these functions rearrange data within a file or object (e.g. sorting or filtering)? If so, what are the options for ordering data (e.g. 
by abundance, alphabetical sorting, etc.)” \"arrange_taxa\" sorts the order of the taxa stored in `taxonomy` or `taxmap` objects. The order of taxa has little effect on most operations on these objects besides ordering the results of functions that return per-taxon information, like `supertaxa`. \"arrange_obs\" orders data stored in a `taxmap` object (e.g. the rows in an OTU table) based on some characteristics of that data. The options for ordering the data are therefore any piece of information associated with elements in that data set, such as the contents of columns in that data set (e.g. the names of OTUs, the counts of OTUs, etc.). We added some more descriptions of this to the paper."
}
]
}
] | 1
|
https://f1000research.com/articles/7-272
|
https://f1000research.com/articles/6-2178/v1
|
27 Dec 17
|
{
"type": "Systematic Review",
"title": "Patent foramen ovale closure versus medical therapy for stroke prevention: A systematic review and meta-analysis of randomized controlled trials",
"authors": [
"Gary Tse",
"William K.K. Wu",
"Mengqi Gong",
"George Bazoukis",
"Wing Tak Wong",
"Sunny Hei Wong",
"Konstantinos Lampropoulos",
"Adrian Baranchuk",
"Lap Ah Tse",
"Yunlong Xia",
"Guangping Li",
"Martin C.S. Wong",
"Yat Sun Chan",
"Nan Mu",
"Mei Dong",
"Tong Liu",
"International Health Informatics Study (IHIS) Network",
"Gary Tse",
"William K.K. Wu",
"Mengqi Gong",
"George Bazoukis",
"Wing Tak Wong",
"Sunny Hei Wong",
"Konstantinos Lampropoulos",
"Adrian Baranchuk",
"Lap Ah Tse",
"Yunlong Xia",
"Guangping Li",
"Martin C.S. Wong",
"Yat Sun Chan",
"Nan Mu"
],
"abstract": "Background: Previous randomized trials on patent foramen ovale (PFO) closure versus medical therapy for stroke prevention were inconclusive. Recently, two new randomized trials and new findings from an extended follow-up of a previous trial have been published on this topic. We conducted a systematic review and meta-analysis of randomized trials comparing PFO closure with medical therapy for stroke prevention. Methods: PubMed and Cochrane Library were searched until 16th September 2017. The following search terms were used for PubMed: \"patent foramen ovale\" AND (stroke OR embolism) and \"randomized\" AND \"Trial\". For Cochrane Library, the following terms were used: \"patent foramen ovale\" AND \"closure\" AND (stroke OR embolism). Results: A total of 91 and 55 entries were retrieved from each database using our search strategy respectively, of which six studies on five trials met the inclusion criteria. This meta-analysis included 1829 patients in the PFO closure arm (mean age: 45.3 years; 54% male) and 1972 patients in the medical therapy arm (mean age: 45.1 years; 51% male). The median follow-up duration was 50 ± 30 months. When compared to medical therapy, PFO closure significantly reduced primary endpoint events with a risk ratio [RR] of 0.60 (95% CI: 0.44-0.83, P < 0.0001; I2: 15%). It also reduced stroke (RR: 0.50, 95% CI: 0.35-0.73, P < 0.0001; I2: 32%) despite increasing the risk of atrial fibrillation/flutter (RR: 1.90, 95% CI: 1.23-2.93, P < 0.01; I2: 43%). However, it did not reduce transient ischemic accident events (0.75; 95% CI: 0.51-1.10, P = 0.14; I2: 0%), all-cause bleeding (RR: 0.89; 95% CI: 0.44-1.78, P = 0.74; I2: 51%) or gastrointestinal complications (RR: 0.92; 95% CI: 0.32-2.70, P = 0.88; I2: 0%). Conclusions: PFO closure significantly reduces risk of stroke when compared to medical treatment and should therefore be considered for stroke prevention in PFO patients.",
"keywords": [
"Patent foramen ovale",
"PFO closure",
"stroke",
"medical therapy"
],
"content": "Introduction\n\nThe association between the presence of a patent foramen ovale (PFO) and cryptogenic stroke has been established by previous case-control studies. However, whether PFO closure is effective in reducing stroke events when compared to medical therapy is controversial. Three randomized trials, CLOSURE I1, PC2 and RESPECT3, were conducted. All of these trials showed numerically fewer events in the primary intention-to-treat analysis, but this did not reach statistical significance. Recently, two trials have focused on this issue. Firstly, the CLOSE trial evaluated PFO closure or anticoagulation against antiplatelet therapy, with a primary endpoint of fatal or non-fatal stroke4. Secondly, the REDUCE trial compared PFO closure to antiplatelet therapy only, with a primary endpoint of ischemic stroke, new ischemic stroke or silent brain infarction, demonstrating significant reductions in these events compared to antiplatelet therapy5. Moreover, long-term data of the RESPECT trial were recently published6. Given these new findings, we conducted a systematic review and meta-analysis of these randomized trials to evaluate the benefits and complication rates in PFO closure versus medical therapy.\n\n\nMethods\n\nThe meta-analysis was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement (PRISMA; a completed checklist can be found in Supplementary File 1). PubMed and Cochrane Library were searched for randomized trials that compared the efficacy in stroke prevention of patent foramen ovale (PFO) closure with that of medical therapy. The following search terms were used for PubMed: \"patent foramen ovale\" AND (stroke OR embolism) and \"randomized\" AND \"Trial\". For Cochrane Library, the following terms were used: \"patent foramen ovale\" AND \"closure\" AND (stroke OR embolism). 
The search period was from the beginning of the databases through to 16th September 2017, with no language restrictions.\n\nThe following inclusion criteria were applied: i) the design was a randomized trial in humans, ii) the study compared stroke outcomes for PFO closure versus medical therapy. Quality assessment of randomized controlled trials was performed using the Cochrane Risk Assessment Tool (Supplementary Figure 1 and Supplementary Figure 2).\n\nData from the different studies were entered in Microsoft Excel (2016 Version). All publications extracted from the search strategy were assessed for compliance with the inclusion criteria. In this meta-analysis, the extracted data elements consisted of: i) trial name; ii) follow-up duration; iii) quality score; and iv) characteristics of the population, including sample size, sex and age. Two reviewers (GT and MG) independently reviewed each included study and disagreements were resolved by adjudication with input from a third reviewer (TL).\n\nThe number of events for: i) primary endpoint, ii) stroke, iii) transient ischemic attack, iv) all-cause bleeding complications, v) gastrointestinal complications [bleeding, ulceration, ulcer perforation], vi) short-term atrial fibrillation or flutter, vii) long-term atrial fibrillation or flutter, were identified and extracted independently by each reviewer from each trial. The Comprehensive Meta-Analysis Software (Version 2) was used for subsequent meta-analyses and statistical analyses. The event rates (events per patient-year) were used to calculate rate ratios for each study, which were pooled in subsequent meta-analyses.\n\nHeterogeneity across studies was assessed using the I2 statistic from the standard chi-square test, which describes the percentage of the variability in the effect estimates resulting from heterogeneity. I2 > 50% was considered to reflect significant statistical heterogeneity and in such cases the random-effects model was used. 
To identify the origin of the heterogeneity, sensitivity analyses were performed by excluding one study at a time. Funnel plots showing standard errors against the logarithms of the odds ratio were constructed. The Begg and Mazumdar rank correlation test and Egger’s test were used to detect publication bias.\n\n\nResults\n\nA flow diagram detailing the above search terms with inclusion and exclusion criteria is depicted in Figure 1. A total of 91 and 55 studies were retrieved from PubMed and Cochrane Library, respectively. Of these, six studies met our inclusion criteria2–7. These were based on the following trials: CLOSURE I, PC, CLOSE, RESPECT and REDUCE. The original publication on the RESPECT trial was excluded because an update on longer term results was recently published6. Therefore, a total of five studies were included in this meta-analysis1–5. The baseline characteristics of these studies are listed in Supplementary Table 1. This meta-analysis included 1829 patients in the PFO closure arm (mean age: 45.3 years; 54% male; mean follow-up duration 50 ± 30 months) and 1972 patients in the medical therapy arm (mean age: 45.1 years; 51% male; mean follow-up duration 50 ± 31 months). For the meta-analyses, event rates (events per patient-year) were extracted and used to calculate rate ratios. The results of the statistical treatment of this meta-analysis are detailed in Supplementary File 2, with those of sensitivity analyses by the leave-one-out method described in Supplementary Figure 3–Supplementary Figure 8. Funnel plots plotting standard errors against the logarithms of the risk ratios are shown in Supplementary Figure 9–Supplementary Figure 14. In all of the analyses, the Begg and Mazumdar rank correlation test suggested no significant publication bias (P > 0.05) and Egger’s test demonstrated no asymmetry (P > 0.05).\n\nAll five trials compared the primary endpoints in PFO closure versus medical therapy. 
The different trials used slightly different primary endpoints (Supplementary Table 2), as follows: 1) CLOSURE I trial: stroke, transient ischemic attack (TIA), 30-day mortality, neurology-related death, 2) PC trial: stroke, TIA, death, peripheral embolism, 3) CLOSE trial: fatal or non-fatal stroke, 4) RESPECT trial: nonfatal ischemic stroke, fatal ischemic stroke, or early death after randomization and 5) REDUCE trial: co-primary endpoints of i) ischemic stroke, and ii) new ischemic stroke or silent brain infarction. Among the 1829 subjects who underwent PFO closure, 70 (3.8%) met the primary endpoint (Supplementary Table 3). By contrast, 112 of the 1972 subjects receiving medical therapy (5.7%) met the primary endpoint. Our meta-analysis shows that PFO closure significantly reduced primary endpoint events when compared to medical therapy with a rate ratio [RR] of 0.60 (95% CI: 0.44-0.83, P < 0.0001; I2: 15%) (Figure 2, top panel). Using hazard ratios [HR] from the trials produced negligible differences from our event rate analyses (HR: 0.61; 95% CI: 0.45-0.83, P < 0.01; I2: 0%).\n\nSubgroup analyses were performed for the primary endpoint based on atrial septal aneurysm and shunt size by pooling hazard ratios from the subgroup analyses of the included studies. PFO closure was not significantly better than medical therapy for patients with an atrial septal aneurysm (HR: 0.46; 95% CI: 0.14-1.60, P = 0.22; I2: 62%, Supplementary Figure 15) or without an atrial septal aneurysm (HR: 0.74; 95% CI: 0.47-1.17, P = 0.19; I2: 0%, Supplementary Figure 16). By contrast, PFO closure significantly reduced primary endpoint events in patients with large shunt size (HR: 0.27, 95% CI: 0.14-0.54, P < 0.0001; I2: 0%, Supplementary Figure 17) but not in those with small shunt size (HR: 0.80, 95% CI: 0.49-1.31; P = 0.38; I2: 0%, Supplementary Figure 18).\n\nAll five trials compared the stroke events in PFO closure versus medical therapy. 
The different trials used slightly different primary endpoints, as follows: 1) CLOSURE I trial: acute focal neurological event that is MR imaging positive, regardless of duration of clinical symptoms, or if imaging cannot be performed for confirmation, it was defined as a persistent focal neurological deficit lasting longer than 24 hours; 2) PC trial: any neurologic deficit lasting for >24 hours typically with documentation in magnetic resonance imaging (MRI) or computed tomography (CT); 3) CLOSE trial: sudden onset of focal neurological symptoms with the presence of cerebral infarction in the appropriate territory on brain imaging (CT or MRI), regardless of the duration of the symptoms (less than or greater than 24 hours); 4) RESPECT trial: ischemic stroke was defined as an acute focal neurologic deficit, which was presumed to be due to focal ischemia, and either symptoms that persisted for 24 hours or longer or symptoms that persisted for less than 24 hours but were associated with findings of a new, neuroanatomically relevant, cerebral infarct on MRI or CT; and 5) REDUCE trial: an acute focal neurologic deficit, presumably due to ischemia, that either resulted in clinical symptoms lasting 24 hours or more or was associated with evidence of relevant infarction on MRI or, if MRI could not be performed, CT of the brain. Taking together the data from all five trials, stroke occurred in 45 patients (2.5%) in the PFO closure group, but in 102 patients (5.2%) in the medical therapy group. This gave a RR of 0.50 that was statistically significant (95% CI: 0.35-0.73, P < 0.0001; I2: 32%) (Figure 2, middle panel). Using HR from the trials produced negligible differences from our event rate analyses (HR: 0.49, 95% CI: 0.34-0.71, P < 0.01; I2: 30%).\n\nTIAs were assessed in all five trials and the various definitions are shown in Supplementary Table 2. 
These occurred in 44 patients (2.4%) in the PFO closure group and in 67 patients (3.4%) in the medical therapy group (Supplementary Table 3). There was no statistically significant difference in the RR (0.75; 95% CI: 0.51-1.10, P = 0.14; I2: 0%) (Figure 2, bottom panel). Hazard ratios were available from four trials on TIAs, with a pooled HR of 0.73 (95% CI: 0.49-1.09, P = 0.13; I2: 0%).\n\nAtrial fibrillation or flutter was detected in 76 patients in the PFO closure group (4.2%) and 37 patients (1.9%) in the medical therapy group (Supplementary Table 4). This equated to a significant increase in the risk when PFO closure was used (RR: 1.90, 95% CI: 1.23-2.93, P < 0.01; I2: 43%) (Figure 3, top panel). Subgroup analysis was performed for the type of atrial fibrillation or flutter by dividing the episodes into i) paroxysmal or minor, and ii) permanent or major [as defined by the individual trials]. This revealed that most of the episodes were only paroxysmal or minor for the PFO group (3.0%) when compared to the medical therapy group (0.6%) (RR: 7.70, 95% CI: 2.30-19.77; P < 0.0001; I2: 32%) (Figure 3, middle panel). Permanent or serious atrial fibrillation or flutter occurred in 1.3% in the PFO closure group compared to 0.4% in the medical therapy group, with no significant difference between the groups (RR: 2.19, 95% CI: 0.94-5.01; P = 0.07; I2: 0%) (Figure 3, bottom panel).\n\nAll bleeding complications were counted from the included studies (Supplementary Table 5). These were comparable between the groups, occurring in 39 (2.1%) of the PFO closure group and 47 (2.4%) in the medical therapy group, with no significant difference between them (RR: 0.89; 95% CI: 0.44-1.78, P = 0.74; I2: 51%) (Figure 4, top panel). Three trials reported gastrointestinal complications of hemorrhage, ulceration or ulcer perforation (Supplementary Table 5), which occurred in 7 and 8 patients in the PFO closure and medical therapy groups, respectively (0.4% for both arms). 
Therefore, the risk of gastrointestinal complications was not reduced by PFO closure (RR: 0.92; 95% CI: 0.32-2.70, P = 0.88; I2: 0%) (Figure 4, bottom panel).\n\n\nDiscussion\n\nThe key findings of this meta-analysis are that, compared to medical therapy, PFO closure significantly reduced primary endpoints by 40% and strokes by 50%, and had comparable risks of TIAs. Nevertheless, these benefits were observed despite a two-fold increase in the risk of atrial fibrillation or flutter in the PFO closure group. No difference in the risks of bleeding or gastrointestinal complications (bleeding, ulceration or ulcer perforation) was observed.\n\nThe foramen ovale remains open in about 25% of the healthy population, giving rise to a PFO8. PFO can be asymptomatic, but potentially causes cryptogenic strokes mainly through the mechanism of paradoxical embolization. PFO closure is used either for primary or secondary prevention of stroke. It has been proposed that PFO closure is an effective treatment to prevent recurrent stroke or TIA in patients with cryptogenic stroke if the shunt grade of the PFO is greater than moderate9. Long-term follow-up following percutaneous PFO closure for presumed paradoxical embolism has demonstrated very low recurrence rates10,11. Eustachian valve, Chiari’s network, medium-large shunt on trans-esophageal echocardiography, hypertension, age and the Essen stroke risk score have been associated with recurrent neurological events11–13. Medical treatment using antiplatelets or anticoagulants is an acceptable alternative approach. A recent meta-analysis reported that anticoagulant therapy was more effective than antiplatelet therapy in preventing recurrent stroke and/or transient ischemic attack, but with a 6-fold greater risk of major bleeding14.\n\nThe evidence from real-world studies has also been controversial. 
A small cohort including 159 patients <55 years old with cryptogenic stroke who received PFO closure or medical therapy did not show a statistically significant difference in the recurrence of ischemic events during a mean follow-up of 51.6 months15. In addition, in another small cohort including 164 patients with PFO and cryptogenic stroke, the two groups (PFO closure vs. medical treatment) did not differ with regard to the composite end-point of death, stroke, transient ischemic attack or peripheral embolism16. Similarly, data from the IPSYS registry, which included 521 patients aged 18–45 years old with cryptogenic stroke and PFO, showed no significant difference in either the composite end-point [ischemic stroke, transient ischemic attack, or peripheral embolism] (P=0.285) or brain ischemia (P=0.168) between the PFO closure and medical treatment groups17. Additionally, Mirzada and colleagues did not find a difference in recurrent stroke or TIA between the PFO closure and medical treatment groups18. Most studies of PFO closure included young patients (<55 years old). A recent study compared the outcomes of PFO closure between a young (<55 years old) and an old group (>55 years old) of patients19. It found that PFO closure was as safe in older patients as in younger patients, but recurrent cerebral ischemia was more frequent, likely associated with age-related conditions rather than with paradoxical embolism19.\n\nPrevious meta-analyses of randomized trials have found no statistically significant differences between PFO closure and medical therapy in the prevention of recurrent ischemic stroke20–23, whilst PFO closure was associated with an increased risk of atrial fibrillation20,23. By contrast, others have reported some benefits. For example, a patient-level meta-analysis reported that PFO closure reduced recurrent stroke and had a statistically significant effect on the composite of stroke, TIA, and mortality in adjusted analyses24. 
Moreover, another reported a 50% relative reduction of stroke and/or TIA versus antiplatelet therapy and an 82% relative reduction of major bleeding versus anticoagulant therapy14, whilst a third reported significant reductions in recurrent neurological events in the intention-to-treat, per-protocol and as-treated cohorts25. By pooling data from two additional trials, our meta-analysis provides a firm conclusion that PFO closure produces a statistically significant reduction in the risk not only of primary endpoint events, but also of strokes. On subgroup analysis, we found that PFO closure significantly reduced primary endpoint events in patients with large shunt size, but not in those with small shunt size. Moreover, although PFO closure was no more effective than medical therapy in reducing TIAs, what is important is that stroke, which is responsible for significant morbidity as well as mortality, was prevented. These benefits are also observed in real-world studies. For example, a long-term propensity score-matched comparison of PFO closure with medical therapy showed a mortality benefit26. These data also raise the issue of the benefit of primary prevention in cases with high-risk PFO27. Such a simple intervention might indeed be effective in preventing the first stroke event. Despite the clear benefits, potential complications of PFO closure should be noted. For example, in the RESPECT trial, significant increases in pulmonary emboli (12 in the PFO closure group vs. 3 in the control group) and deep venous thromboses (5 and 1, respectively) were observed. Our meta-analysis also confirmed the increased risk of AF following PFO closure. Its clinical relevance is less certain for two reasons: 70–90% of these events occurred in the first month and did not persist beyond this timeframe, and stroke incidence was reduced despite the occurrence of AF. 
While the majority of AF occurrences were transient, it is not known how much subsequent monitoring these patients underwent or whether they were anticoagulated as a result.\n\nIn our meta-analysis of event rates for the primary endpoint(s), low levels of heterogeneity were observed; what heterogeneity there was is probably attributable to the different definitions used across the trials. For example, two trials, CLOSURE I7 and PC2, included mortality as part of the primary endpoint, whereas the remaining three trials, CLOSE4, RESPECT3 and REDUCE5, included ischemic stroke events but not mortality. The medical therapy offered was similar across the studies, involving anticoagulation, antiplatelet therapy, or both. In the CLOSE trial, the investigators compared antiplatelet therapy with either anticoagulation therapy or PFO closure. For this particular trial, we pooled data from the anticoagulation and antiplatelet therapy arms together and compared the event rates with the PFO closure group. The present meta-analysis also demonstrates the safety of PFO closure when compared to more conservative medical therapy.\n\nHowever, there are some limitations inherent to the trials themselves. Firstly, the crossover rate is substantial. For example, in REDUCE, 6.3% of study subjects crossed over from PFO closure to medical treatment and 6.3% crossed over in the opposite direction. Secondly, actual mechanical and anatomical closure of the PFO is of paramount importance to prevent recurrent paradoxical embolization. The trials had different degrees of successful closure (e.g. REDUCE trial: 75%; CLOSURE trial: 86%; CLOSE trial: 93%), but these were not related to outcomes. Thirdly, the definitions of TIA differed between the trials, which could have contributed to the between-study heterogeneity. 
Finally, the trials had large proportions of study subjects who were lost to follow-up, withdrew consent or crossed over to the other study arm, leading to uncertainty in the reported event rates.\n\nThis meta-analysis has several strengths. It is the largest meta-analysis of randomized trials to date, including more than 3800 participants from five trials. Moreover, the follow-up duration was sufficiently long for events to be detected. Our quality analysis indicated that the studies had a low risk of bias. Low levels of heterogeneity were observed for most of our analyses, including primary endpoints, stroke, transient ischemic attacks and gastrointestinal complications, indicating that it was appropriate to pool these studies together. However, several limitations inherent in the present meta-analysis should be noted. Firstly, the meta-analysis of all-cause bleeding across the trials showed a high level of heterogeneity, which may be clinical in nature, especially as different types of bleeding were measured. Secondly, there is an imbalance between events and lost-to-follow-up/withdrawals28. In most trials, the ratio between the two is often in the region of 5 to 1, but it is around 0.3 to 1 in the trials included in this meta-analysis. Moreover, publication bias results should be interpreted with caution, as assessing publication bias with fewer than ten studies is not recommended; the results are nevertheless presented here for the sake of completeness. Furthermore, there was a moderate degree of heterogeneity for the atrial septal aneurysm vs. no aneurysm comparison. Since our aim was to compare the effectiveness of PFO closure to medical treatment, we did not perform additional analysis by medical group assignment, such as comparing antiplatelet or anticoagulant therapy to PFO closure. The majority of medical management patients were treated with antiplatelet medications, with a minority treated with oral anticoagulation. 
Since the presumed mechanism of stroke attributable to PFO is paradoxical embolism from venous thrombi, or in situ thrombus formation, oral anticoagulants may be more effective for those causes. Where reported in individual trials, stroke outcome events were numerically fewer when medical management was with oral anticoagulation. Further analyses are needed to compare PFO closure with anticoagulation treatment alone, as this was beyond the scope of the current study.\n\n\nConclusions\n\nPFO closure significantly reduces the risk of primary endpoint events and strokes, but not TIAs, when compared to medical treatment, despite higher rates of atrial fibrillation or flutter being observed. No differences in bleeding or gastrointestinal complications were detected between the two arms. PFO closure should therefore be considered for prevention of stroke in patients with PFO, especially those presenting with cryptogenic stroke who are less than 60 years of age with moderate to severe shunting.\n\n\nData availability\n\nAll data required for reproducibility of this study are available from published studies.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nGT and SW are supported by Clinical Assistant Professorship appointments by the Croucher Foundation of Hong Kong.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: PRISMA checklist.\n\nSupplementary File 2: Supplementary Figures 1–16.\n\nSupplementary File 3: Supplementary Tables 1–5.\n\n\nReferences\n\nFurlan AJ, Reisman M, Massaro J, et al.: Closure or medical therapy for cryptogenic stroke with patent foramen ovale. N Engl J Med. 2012; 366(11): 991–999.\n\nMeier B, Kalesan B, Mattle HP, et al.: Percutaneous closure of patent foramen ovale in cryptogenic embolism. N Engl J Med. 2013; 368(12): 1083–91.\n\nSaver JL, Carroll JD, Thaler DE, et al.: Long-Term Outcomes of Patent Foramen Ovale Closure or Medical Therapy after Stroke. N Engl J Med. 2017; 377(11): 1022–1032.\n\nMas JL, Derumeaux G, Guillon B, et al.: Patent Foramen Ovale Closure or Anticoagulation vs. Antiplatelets after Stroke. N Engl J Med. 2017; 377(11): 1011–1021.\n\nSøndergaard L, Kasner SE, Rhodes JF, et al.: Patent Foramen Ovale Closure or Antiplatelet Therapy for Cryptogenic Stroke. N Engl J Med. 2017; 377(11): 1033–1042.\n\nCarroll JD, Saver JL, Thaler DE, et al.: Closure of patent foramen ovale versus medical therapy after cryptogenic stroke. N Engl J Med. 2013; 368(12): 1092–100.\n\nFurlan AJ, Reisman M, Massaro J, et al.: Closure or medical therapy for cryptogenic stroke with patent foramen ovale. N Engl J Med. 2012; 366(11): 991–999.\n\nHagen PT, Scholz DG, Edwards WD: Incidence and size of patent foramen ovale during the first 10 decades of life: an autopsy study of 965 normal hearts. Mayo Clin Proc. 1984; 59(1): 17–20.\n\nKim M, Kim S, Moon J, et al.: Effect of patent foramen ovale closure for prevention on recurrent stroke or transient ischemic attack in selected patients with cryptogenic stroke. J Interv Cardiol. 2017.\n\nEeckhout E, Martin S, Delabays A, et al.: Very long-term follow-up after percutaneous closure of patent foramen ovale. EuroIntervention. 2015; 10(12): 1474–9.\n\nInglessis I, Elmariah S, Rengifo-Moreno PA, et al.: Long-term experience and outcomes with transcatheter closure of patent foramen ovale. JACC Cardiovasc Interv. 2013; 6(11): 1176–83.\n\nRudolph V, Augustin J, Hofmann T, et al.: Predictors of recurrent stroke after percutaneous closure of patent foramen ovale. EuroIntervention. 2014; 9(12): 1418–22.\n\nRigatelli G, Dell'avvocata F, Braggion G, et al.: Persistent venous valves correlate with increased shunt and multiple preceding cryptogenic embolic events in patients with patent foramen ovale: an intracardiac echocardiographic study. Catheter Cardiovasc Interv. 2008; 72(7): 973–6.\n\nPatti G, Pelliccia F, Gaudio C, et al.: Meta-analysis of net long-term benefit of different therapeutic strategies in patients with cryptogenic stroke and patent foramen ovale. Am J Cardiol. 2015; 115(6): 837–43.\n\nDanese A, Stegagno C, Tomelleri G, et al.: Clinical outcomes of secondary prevention strategies for young patients with cryptogenic stroke and patent foramen ovale. Acta Cardiol. 2017; 72(4): 410–418.\n\nMoon J, Kang WC, Kim S, et al.: Comparison of Outcomes after Device Closure and Medication Alone in Patients with Patent Foramen Ovale and Cryptogenic Stroke in Korean Population. Yonsei Med J. 2016; 57(3): 621–5.\n\nPezzini A, Grassi M, Lodigiani C, et al.: Propensity Score-Based Analysis of Percutaneous Closure Versus Medical Therapy in Patients With Cryptogenic Stroke and Patent Foramen Ovale: The IPSYS Registry (Italian Project on Stroke in Young Adults). Circ Cardiovasc Interv. 2016; 9(9): pii: e003470.\n\nMirzada N, Ladenvall P, Hansson PO, et al.: Recurrent stroke in patients with patent foramen ovale: An observational prospective study of percutaneous closure of PFO versus non-closure. Int J Cardiol. 2015; 195: 293–9.\n\nScacciatella P, Meynet I, Presbitero P, et al.: Recurrent cerebral ischemia after patent foramen ovale percutaneous closure in older patients: A two-center registry study. Catheter Cardiovasc Interv. 2016; 87(3): 508–14.\n\nLi J, Liu J, Liu M, et al.: Closure versus medical therapy for preventing recurrent stroke in patients with patent foramen ovale and a history of cryptogenic stroke or transient ischemic attack. Cochrane Database Syst Rev. 2015; (9): CD009938.\n\nRiaz IB, Dhoble A, Mizyed A, et al.: Transcatheter patent foramen ovale closure versus medical therapy for cryptogenic stroke: a meta-analysis of randomized clinical trials. BMC Cardiovasc Disord. 2013; 13: 116.\n\nSpencer FA, Lopes LC, Kennedy SA, et al.: Systematic review of percutaneous closure versus medical therapy in patients with cryptogenic stroke and patent foramen ovale. BMJ Open. 2014; 4(3): e004282.\n\nUdell JA, Opotowsky AR, Khairy P, et al.: Patent foramen ovale closure vs medical therapy for stroke prevention: meta-analysis of randomized trials and review of heterogeneity in meta-analyses. Can J Cardiol. 2014; 30(10): 1216–24.\n\nKent DM, Dahabreh IJ, Ruthazer R, et al.: Device Closure of Patent Foramen Ovale After Stroke: Pooled Analysis of Completed Randomized Trials. J Am Coll Cardiol. 2016; 67(8): 907–17.\n\nKhan AR, Bin Abdulhak AA, Sheikh MA, et al.: Device closure of patent foramen ovale versus medical therapy in cryptogenic stroke: a systematic review and meta-analysis. JACC Cardiovasc Interv. 2013; 6(12): 1316–1323.\n\nWahl A, Jüni P, Mono ML, et al.: Long-term propensity score-matched comparison of percutaneous closure of patent foramen ovale with medical treatment after paradoxical embolism. Circulation. 2012; 125(6): 803–12.\n\nNietlispach F, Meier B: Percutaneous closure of patent foramen ovale: safe and effective but underutilized. Expert Rev Cardiovasc Ther. 2015; 13(2): 121–3.\n\nDellborg M, Eriksson P: Randomized trials of closure of persistent foramen ovale (PFO) vs medical therapy for patients with cryptogenic stroke - Effect of lost-to-follow-up and withdrawal of consent. Int J Cardiol. 2016; 207: 308–9."
}
|
[
{
"id": "30279",
"date": "01 Feb 2018",
"name": "Bernhard Meier",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors have to be commended for their succinct meta-analysis of the recently appearing three papers regarding randomized trials concerning closure of the patent foramen ovale (PFO) for secondary prevention after ischemic events.\nUnder adverse events it should be mentioned that in the RESPECT trial there was a significant increase in pulmonary embolisms (12 in the PFO closure group and 3 in the control group) and deep venous thromboses (5 and 1, respectively). The groin compression after the procedure may be performed too tight or too long and account for some venous thromboses or pulmonary embolisms. Two of the pulmonary embolisms are listed as a complication of the procedure. Nonetheless, this is of concern and merits some discussion.\nI also suggest to briefly raise the question of primary prevention in cases with a high-risk PFO. After all, a relatively simple intervention could and should prevent even the first stroke.\nPlease cite and briefly discuss the 3 similar meta-analyses that have appeared in the past weeks:\nLin et al. (2017)1 Shah et al. (2018)2 de Rosa et al. (2018)3\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Yes\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Yes\n\nIs the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.\n\nAre the conclusions drawn adequately supported by the results presented in the review? 
Yes",
"responses": [
{
"c_id": "3843",
"date": "11 Sep 2018",
"name": "Gary Tse",
"role": "Author Response",
"response": "The authors have to be commended for their succinct meta-analysis of the recently appearing three papers regarding randomized trials concerning closure of the patent foramen ovale (PFO) for secondary prevention after ischemic events.\n\nThank you. We have added your suggestion into our ‘discussion’ section as follows: “A recent meta-analysis also reported that in comparison with medical treatment, PFO prevents recurrent stroke and TIA (27). Further, another recent meta-analysis reported that in patients with PFO and cryptogenic stroke, transcatheter device closure decreases risk of recurrent stroke compared with medical therapy alone (28).”\n\nUnder adverse events it should be mentioned that in the RESPECT trial there was a significant increase in pulmonary embolisms (12 in the PFO closure group and 3 in the control group) and deep venous thromboses (5 and 1, respectively). The groin compression after the procedure may be performed too tight or too long and account for some venous thromboses or pulmonary embolisms. Two of the pulmonary embolisms are listed as a complication of the procedure. Nonetheless, this is of concern and merits some discussion.\n\nThank you. We have added your suggestion into our ‘results’ section as follows: “Venous thromboembolisms, which comprised events of pulmonary embolism and deep venous thrombosis, were also counted from the included studies (Supplementary Table 6). Two trials, RESPECT and REDUCE trials, reported venous thromboembolism, which occurred in 20 and 8 patients in the PFO closure and medical therapy groups.”\n\nWe have also added your suggestion further into our ‘discussion’ section as follows: “For example, in the RESPECT trial, a significant increase in pulmonary emboli were observed (12 in the PFO closure group, of which 2 are listed as a complication of the procedure, vs. 3 in the control group) and deep venous thromboses (5 and 1, respectively). 
While the mechanism leading to the increased risk remains unclear, it may be possibly attributable to inappropriate application of groin compression subsequent to PFO intervention.”\n\nI also suggest to briefly raise the question of primary prevention in cases with a high-risk PFO. After all, a relatively simple intervention could and should prevent even the first stroke.\n\nThank you very much for your advice. We have added your suggestion into our ‘discussion’ section as follows: “As such, based on the results of our meta-analysis, it supports the need to primarily prevent high-risk PFO with PFO closure procedures instead of providing medical therapy. This approach is further justified by the increasing simplicity and success rates of the PFO closure procedure (31).”\n\nPlease cite and briefly discuss the 3 similar meta-analyses that have appeared in the past weeks: Lin et al. (2017)1 Shah et al. (2018)2 de Rosa et al. (2018)3\n\nThank you. We have added your suggestion into our ‘discussion’ section as follows: “A recent meta-analysis also reported that in comparison with medical treatment, PFO prevents recurrent stroke and TIA (27). Further, another recent meta-analysis reported that in patients with PFO and cryptogenic stroke, transcatheter device closure decreases risk of recurrent stroke compared with medical therapy alone (28). By contrast, another meta-analysis reported that PFO reduced the risk of stroke, but not TIA, mortality, major bleeding and increased the risk of AF (29).”"
}
]
},
{
"id": "34739",
"date": "13 Jun 2018",
"name": "Rajkumar Doshi",
"expertise": [
"Reviewer Expertise Mechanical circulatory support devices",
"structural heart diseases",
"percutaneous coronary intervention",
"heart failure"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn their manuscript entitled, “Patent foramen ovale closure versus medical therapy for stroke prevention: A systematic review and meta-analysis of randomized controlled trials” Tse et al. performed meta-analysis comparing patent foramen ovale (PFO) closure device with medical therapy. The authors concluded that PFO closure device reduces the risk of stroke with comparable transient ischemic events. However, PFO closure devices are associated with higher rates of atrial fibrillation/flutter.\n\nThe reviewer would like to commend the authors for their work. The topic is timely as 3 randomized controlled trials were published. Overall, the manuscript is well written, and data is presented clearly. Satisfactory statistical analysis was performed. I have enumerated my concerns below.\n\n1. First line introduction needs a citation: “The association between the presence of a patent foramen ovale (PFO) and cryptogenic stroke has been established by previous case-control studies.”\n\n2. Please use abbreviations at its first use including clinical trial names.\n\n3. As mentioned before, the authors could have provided more information on several more impactful variables including atrial septal aneurysm, size of the shunt. Additionally, effectiveness of the closure was missing1.\n\n4. 
While listing complication rates, the authors should also focus on venous thromboembolism rates post-procedure.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Yes\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Yes\n\nIs the statistical analysis and its interpretation appropriate? Yes\n\nAre the conclusions drawn adequately supported by the results presented in the review? Yes",
"responses": [
{
"c_id": "3842",
"date": "11 Sep 2018",
"name": "Gary Tse",
"role": "Author Response",
"response": "The reviewer would like to commend the authors for their work. The topic is timely as 3 randomized controlled trials were published. Overall, the manuscript is well written, and data is presented clearly. Satisfactory statistical analysis was performed. I have enumerated my concerns below.\n\nThank you very much for taking your time to provide us with your helpful comments and suggestions, which we have taken on board fully. Please see our responses and changes as detailed below.\n\n1. First line introduction needs a citation: “The association between the presence of a patent foramen ovale (PFO) and cryptogenic stroke has been established by previous case-control studies.”\n\nThank you. We have now added a citation for the first line introduction.\n\n2. Please use abbreviations at its first use including clinical trial names.\n\nThank you. We have now used abbreviations at first use.\n\n3. As mentioned before, the authors could have provided more information on several more impactful variables including atrial septal aneurysm, size of the shunt. Additionally, effectiveness of the closure was missing1.\n\nThank you for your comment. We have already included your suggestion in our ‘discussion’ section as follows: “Subgroup analyses were performed for the primary endpoint based on atrial septal aneurysm and shunt size by pooling hazard ratios from the subgroup analyses of the included studies. PFO closure was not significantly better than medical therapy for patients with an atrial septal aneurysm (HR: 0.46; 95% CI: 0.14-1.60, P = 0.22; I2: 62%, Supplementary Figure 15) or without an atrial septal aneurysm (HR: 0.74; 95% CI: 0.47-1.17, P = 0.19; I2: 0%, Supplementary Figure 16). 
By contrast, PFO closure significantly reduced primary endpoint events in patients with large shunt size (HR: 0.27, 95% CI: 0.14-0.54, P < 0.0001; I2: 0%, Supplementary Figure 17) but not in those with small shunt size (HR: 0.80, 95% CI: 0.49-1.31; P = 0.38; I2: 0%, Supplementary Figure 18)” and “The key findings of this meta-analysis are that, compared to medical therapy, PFO closure significantly reduced primary endpoints by 40% and strokes by 50%, and had comparable risks of TIAs. Nevertheless, these benefits were observed despite a two-fold increase in the risk of AF or AFL in the PFO closure group.”\n\nWe have added your suggestion into our ‘discussion’ section as follows: “Further, another recent meta-analysis reported that in patients with PFO and cryptogenic stroke, transcatheter device closure decreases risk of recurrent stroke compared with medical therapy alone (28).”\n\n4. While listing complication rates, the authors should also focus on venous thromboembolism rates post-procedure.\n\nThank you. We have added your suggestion into our ‘results’ section as follows: “Venous thromboembolisms, which comprised events of pulmonary embolism and deep venous thrombosis, were also counted from the included studies (Supplementary Table 6). Two trials, RESPECT and REDUCE trials, reported venous thromboembolism, which occurred in 20 and 8 patients in the PFO closure and medical therapy groups.”"
}
]
}
] | 1
|
https://f1000research.com/articles/6-2178
|
https://f1000research.com/articles/7-773/v1
|
19 Jun 18
|
{
"type": "Research Article",
"title": "Clinical application of high frequency jet ventilation in stereotactic liver ablations – a methodological study",
"authors": [
"Karolina Galmén",
"Jacob Freedman",
"Grzegorz Toporek",
"Waldemar Goździk",
"Piotr Harbut",
"Karolina Galmén",
"Jacob Freedman",
"Grzegorz Toporek",
"Waldemar Goździk"
],
"abstract": "Background: Computer-assisted navigation during thermal ablation of liver tumours may help to correct needle placement and improve ablation efficacy in percutaneous, laparoscopic and open interventions. The potential advantage of using the high frequency jet ventilation technique (HFJV) during the procedure is that it minimises the amplitude of respiration-related movements of the upper abdominal organs. The aim of this clinical methodological trial was to establish whether HFJV would give smaller ventilation-induced liver movements than conventional ventilation during stereotactic navigated ablation of liver metastases under open surgery. Methods: Five consecutive patients scheduled for elective, open liver ablation under general propofol and remifentanil anaesthesia were included in the study protocol. During the stereotactic targeting of the tumours, HFJV was chosen for intraoperative lung ventilation. For tracking of liver movement, a rigid marker shield was placed on the liver surface and tracked with an optical position measurement system. A 4D position of the marker shield was measured for HFJV and conventional tidal volume lung ventilation (TV). At each time point the magnitude of liver displacement was calculated as the Euclidean distance between the translational component of the marker shield's 3D position and the previously estimated centroid of the translational motion. Results: The mean Euclidean liver displacement was 0.80 (0.10) mm for HFJV and 2.90 (1.03) mm for TV, with a maximum displacement reaching 12 mm on standard ventilation (p=0.0001). Conclusion: HFJV is a valuable lung ventilation method for patients undergoing stereotactic surgical procedures under general anaesthesia when reduction of organ displacement is crucial.",
"keywords": [
"High frequency jet ventilation",
"Liver ablation",
"Stereotactic surgery"
],
"content": "Introduction\n\nThermal ablation of primary and secondary liver tumours is a potentially curative treatment, and an alternative for patients not eligible for surgical resection due to severe comorbidity or underlying liver disease. Its efficacy has been proven for tumours smaller than 30 mm in diameter, especially in the treatment of hepatocellular carcinomas1. Adequate imaging of the tumour and precise guidance of the ablation device are crucial for accurate local ablative treatment. Accurate targeting is essential for an effective treatment, reducing the risk of local recurrence and the need for retreatment1,2.\n\nRecent developments in image guidance systems, with robotic and computer-assisted navigation, may help correct needle placement and improve ablation efficacy. Needle navigation and placement are based on pre-interventional imaging. Early phantom and clinical experiences with navigation systems suggest good procedural accuracy, reduced procedure time and reduced patient radiation exposure compared to freehand techniques3.\n\nThe high frequency jet-ventilation technique (HFJV) was developed in the seventies by Klain and Smith, and mostly applied in the field of ear-nose-and-throat (ENT) surgery. It does not rely on conventional tidal volumes but uses high frequency forced gas movement4. 
The potential advantage of using HFJV in abdominal surgery is to minimise the amplitude of respiration-related movements of the upper-abdominal organs compared to conventional tidal volume lung ventilation (TV)5,6,7.\n\nThe aim of this clinical methodological trial was to measure the liver movements during open surgery under general anaesthesia and compare HFJV with conventional ventilation.\n\n\nMethods\n\nFive consecutive patients who were scheduled for elective, open liver ablations were included in the clinical protocol.\n\nGeneral anaesthesia was induced and maintained by total intravenous technique (TIVA) with target controlled infusion (TCI - Alaris, PK CareFusion, Sarl, Switzerland) of propofol 2–6 µg/ml according to the Marsh pharmacokinetic model (Propofol Sandoz®, Sandoz, Copenhagen, Denmark) and remifentanil 2–10 ng/ml according to the Minto pharmacokinetic model (Ultiva®, GlaxoSmithKline, Solna, Sweden), with muscle relaxation achieved by rocuronium 0.6 mg/kg during induction of anaesthesia, followed by incremental doses of 0.15 mg/kg during surgery (Rocuronium, Fresenius Kabi, Uppsala, Sweden).\n\nEndotracheal intubation with a conventional endotracheal (ET) tube was performed at the induction of anaesthesia, followed by the initiation of conventional lung ventilation with pressure control/volume guarantee ventilation (PCV/VG - Aisys Carestation, GE Healthcare, Helsinki, Finland) as well as a lung-protective regime to achieve normo-ventilatory status. Tidal volumes were calculated based on the reduced body weight, with a 6–7 ml/kg target and a fixed 5 cmH2O positive end-expiratory pressure (PEEP). Laparotomy was performed with a right subcostal incision. An HFJV cannula (LaserJet Catheter, Acutronic Medical Systems AG, Hirzel, Switzerland) was inserted endotracheally with the tip at the end of the ET-tube. HFJV (Monsoon HFJV ventilator, Acutronic Medical Systems AG, 8816 Hirzel, Switzerland) was then initiated and continued during the liver ablation procedure. 
HFJV driving pressure (DP) was adjusted downwards, beginning at 1.8 bar, until satisfactory operation-field conditions were reached in accordance with the operating surgeon's assessment.\n\nIn the first phase, after the induction of anaesthesia, end-tidal carbon dioxide tension (etCO2) was continuously monitored with classical side-stream capnography, targeting a normocapnic state. During the HFJV phase, sequential measurements were taken at 10-minute intervals (integrated Monsoon ventilator etCO2 module). After the termination of the last tumour ablation and the completion of liver movement measurement, conventional lung ventilation was restored. Lastly, the etCO2 measurement was repeated following the same method as the one used at the start of the procedure. Cut-off values for discontinuation of HFJV were either an etCO2 rise above 10 kPa or oxygen desaturation below 90%. If etCO2 exceeded 8 kPa, the DP down-regulation was stopped and DP was instead increased in 0.1 bar increments every 5 minutes until the target etCO2 was reached.\n\nPatients were selected at the regional liver multidisciplinary team conference and were regarded as unresectable due to multiple metastases involving too many liver segments, but numbering fewer than twenty and none larger than thirty millimetres in diameter8. Multiple ablations were then performed using intraoperative ultrasound and a stereotactic targeting device, CAS-One (Cascination AG, Bern, Switzerland), where a previously acquired computed tomography scan was merged with previous scans in cases of vanished lesions, and a 3D model of the liver reconstructed by MeVis Medical Solutions AG (Bremen, Germany) was used as a surgical map with optical navigation of ablation antennae, as previously described8. 
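As a rough, non-clinical illustration of the DP titration and cut-off rules described above, the branching logic can be sketched in Python. The function name, inputs, and the 0.1 bar down-titration step are assumptions for illustration; only the thresholds come from the protocol text:

```python
def hfjv_dp_adjustment(dp_bar, etco2_kpa, spo2_pct, surgeon_satisfied):
    # Protocol cut-offs: discontinue HFJV if etCO2 rises above 10 kPa
    # or oxygen saturation falls below 90%.
    if etco2_kpa > 10.0 or spo2_pct < 90.0:
        return ('discontinue', dp_bar)
    # If etCO2 exceeds 8 kPa, stop down-regulating and instead
    # increase DP in 0.1 bar increments (every 5 minutes in the protocol).
    if etco2_kpa > 8.0:
        return ('increase', round(dp_bar + 0.1, 2))
    # Otherwise, titrate DP downwards from the 1.8 bar starting point
    # until the operating surgeon judges the field satisfactory.
    # (The 0.1 bar decrement here is an assumed step size.)
    if not surgeon_satisfied:
        return ('decrease', round(dp_bar - 0.1, 2))
    return ('hold', dp_bar)
```

For example, an etCO2 of 8.3 kPa (as in patient 2) would trigger the 'increase' branch rather than further down-titration.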
For tracking of liver movement, a rigid marker shield with a set of retroreflective marker spheres was placed on the liver surface in the vicinity of the lower border of segment 4b and tracked with an optical position measurement system (Polaris Vicra, NDI, Canada) incorporated into the CAS-One system, which was positioned near the operative field, thus providing a constant line of sight. The 4D position of the marker shield was measured for approximately 2–3 minutes for HFJV and conventional ventilation.\n\nAt each time point t, the magnitude of liver displacement d_t was calculated as the Euclidean distance between the translational component p⃗_t of the marker shield's 3D position and the previously estimated centroid of the translational motion c̄, i.e. the average translational position of the marker, as given by the equation below:\n\nd_t = ||c̄ − p⃗_t||\n\nAll displacement values d were described quantitatively using the mean (µ) and standard deviation (σ) as well as a maximum value. Statistically significant differences were tested with the two-tailed, nonparametric, unpaired t-test, where p < 0.05 was defined as statistically significant.\n\n\nResults\n\nPatient demographics, medical status and extent of surgery are presented in Table 1.\n\nPatient characteristics. ASA = American Society of Anesthesiologists Physical Status, BMI = Body Mass Index (kg/m2)\n\nVentilator settings and readings are shown in Table 2. The following parameters were registered: end-tidal CO2 concentrations before and after the HFJV phase; respiratory pressures on conventional tidal volume ventilation before and after HFJV (peak inspiratory pressure and mean airway pressure); dynamic lung compliance both before and after the HFJV phase; and tidal volumes on conventional lung ventilation at the liver displacement measurement point. 
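The displacement metric and summary statistics defined in Methods can be sketched in Python; this is an illustrative reconstruction (the use of NumPy and all names are assumptions, not the study's actual software):

```python
import numpy as np

def liver_displacements(positions):
    # positions: (T, 3) array of the marker shield's translational
    # 3D positions over time, as tracked by the optical system.
    p = np.asarray(positions, dtype=float)
    c = p.mean(axis=0)  # centroid: average translational position
    # d_t = ||c - p_t||: Euclidean distance of each sample from the centroid
    return np.linalg.norm(p - c, axis=1)

def summarize(d):
    # Mean, standard deviation, and maximum displacement, as reported.
    return float(d.mean()), float(d.std()), float(d.max())
```

Applied per ventilation mode, this yields the mean, SD, and maximum displacement values of the kind reported in Results.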
HFJV ventilator settings: respiratory frequency and target driving pressure, as well as the measured respiratory parameters: peak inspiratory pressure, mean airway pressure, and maximum end-tidal carbon dioxide tension on HFJV.\n\nIn one case (patient 2), an increase in DP was needed because the etCO2 rose to 8.3 kPa; the optimal value was set to 1.5 bar, whereas in the other four cases it was set to 1.1–1.2 bar.\n\nThe mean Euclidean liver displacement was 0.80 (0.10 SD) mm and 2.90 (1.03 SD) mm for HFJV and TV, respectively, with a maximum displacement of 12 mm on standard ventilation (p=0.0001). Data are shown in Figure 1.\n\nDisplacement of measured point on liver surface during High-Frequency Jet Ventilation (HFJV) and standard ventilation (TV). Error bars mark standard deviation.\n\nEnd-tidal CO2 (etCO2) registered before and after the High frequency jet ventilation (HFJV) phase (etCO2 - pre jet and post jet), respiratory pressures on conventional tidal volume ventilation before and after the HFJV phase, expressed in cmH2O: peak inspiratory pressure (PeakP) pre jet, mean airway pressure (MaP) pre jet, dynamic lung compliance both before and after the HFJV phase (Compliance pre and post jet), as well as tidal volumes on conventional lung ventilation at the liver displacement measurement point (TV post jet). HFJV ventilator settings: respiratory frequency (FQ on jet) and target driving pressure (T-DP on jet), as well as the measured respiratory parameters: peak inspiratory pressure (PIP) and mean airway pressure (MaP) and maximum end-tidal carbon dioxide tension during the HFJV phase (Max etCO2 on jet).\n\n\nDiscussion\n\nOne of the most important challenges the anaesthesiologist faces perioperatively is maintaining the patient's homeostasis while facilitating the course of surgery. 
In certain clinical situations, such as stereotactic ablative procedures, this can be difficult to achieve, since respiratory organ displacement must be kept to a minimum.\n\nThe present investigation provides evidence for the claim that respiration-induced liver motion during intervention can be reduced by more than two thirds when using HFJV instead of TV. This is the only study measuring this effect dynamically. This can have a decisive bearing on local recurrence rates and the risk of collateral damage after image-guided stereotactic treatment of liver tumours. The benefits in terms of radiation dose and respiratory organ shifting when using HFJV in interventional radiology have previously been reported by several groups5,9,10, but these studies were all conducted in the setting of CT-guided ablation with non-dynamic measurements of target organ displacement.\n\nStereotactic navigation is often based on rigid registration of the intraoperative target organ to images obtained before surgery. In this setting, soft tissue deformation and patient motion will affect the navigation system and can cause significant inaccuracy11. The minimization of deformation-induced errors can be achieved in several ways. From an experimental research point of view, the position of the moving target can be measured by implanted navigation aids or by using electromagnetic tracking devices11,12. Implantation of invasive needles is, however, not prudent in a clinical setting due to the high risk of haemorrhage, tumour seeding, and the long-term risks of leaving foreign bodies in situ.\n\nAnother approach is the mathematical modelling of mechanical tissue properties and organ motion in order to predict the target location based on a statistical model derived from preoperative 4D CT. This approach is frequently used in intensity modulated radiation therapies (IMRT)13. 
The relationship between the respiratory cycle and the movement of a target is, however, complex to predict and not feasible in real time, due to highly intensive computational requirements and the obvious risk that conditions differ between the acquisition of preoperative images and a situation with artificial respiration and an open or laparoscopically affected abdomen.\n\nTherefore, respiratory gating methods that reliably reproduce a known breathing stage (temporarily disconnecting the endotracheal tube in anaesthetized patients) seem to be a more reliable approach13,14. An overall internal target movement of 1.41 ± 0.75 mm was reported. However, periods of apnea are usually limited to 1–2 minutes depending on the health condition of the patient. HFJV overcomes these restrictions.\n\nUse of HFJV outside the ENT and thoracic suites has been the subject of several, but rather anecdotal, reports. In minimally invasive oncological procedures, HFJV has been used in percutaneous, laparoscopic, as well as open approaches3,5,9,15. In cardiology it has been beneficial in catheter ablations16. In urology it can be helpful to minimize the number of shocks needed during ESWL treatment16,17.\n\nThe present study is small and, though the liver displacement data are solid, further studies on the physiological effects of HFJV are needed to elucidate its limitations. Carbon dioxide control is one of the important aspects of perioperative management. In the treatment protocol established during the study, it remained even more challenging because of the “less is better” strategy, favouring relatively low respiratory driving pressures.\n\nIntroducing HFJV more widely in the management of computer-assisted abdominal surgery remains promising. Its wider use is, however, limited by equipment availability and staff experience. 
Nevertheless, at the scale of a highly specialized centre, an acceptable skill level can easily be achieved, and the overall cost of the equipment as well as materials and utilities remains reasonable. HFJV is a promising lung ventilation modality for patients undergoing stereotactic surgical procedures under general anaesthesia when reduction of target organ displacement is crucial.\n\n\nEthical considerations\n\nAll procedures performed were in accordance with the ethical standards of the institution at which the studies were conducted. Since this was a retrospective analysis of previously collected clinical material, written consent was obtained only from the two patients who were still alive at the time the decision to analyse and publish the data was made; the other three patients had already died.\n\n\nData availability\n\nDataset 1: Demographic data and ventilation readings. metodological_study_1.xls 10.5256/f1000research.14873.d20721218\n\nDataset 2: Liver positioning data. 14-09-01-open-liver-all.xlsx 10.5256/f1000research.14873.d20721319",
"appendix": "Competing interests\n\n\n\nThe authors declare that they have no conflict of interest.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nZhang M, Ma H, Zhang J, et al.: Comparison of microwave ablation and hepatic resection for hepatocellular carcinoma: a meta-analysis. Onco Targets Ther. 2017; 10: 4829–39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFukuhara T, Aikata H, Hyogo H, et al.: Efficacy of radiofrequency ablation for initial recurrent hepatocellular carcinoma after curative treatment: Comparison with primary cases. Eur J Radiol. 2015; 84(8): 1540–45. PubMed Abstract | Publisher Full Text\n\nEngstrand J, Toporek G, Harbut P, et al.: Stereotactic CT-Guided Percutaneous Microwave Ablation of Liver Tumors With the Use of High-Frequency Jet Ventilation: An Accuracy and Procedural Safety Study. AJR Am J Roentgenol. 2017; 208(1): 193–200. PubMed Abstract | Publisher Full Text\n\nGalmén K, Harbut P, Freedman J, et al.: High frequency jet ventilation for motion management during ablation procedures, a narrative review. Acta Anaesthesiol Scand. 2017; 61(9): 1066–74. PubMed Abstract | Publisher Full Text\n\nBiro P, Spahn DR, Pfammatter T: High-frequency jet ventilation for minimizing breathing-related liver motion during percutaneous radiofrequency ablation of multiple hepatic tumours. Br J Anaesth. 2009; 102(5): 650–3. PubMed Abstract | Publisher Full Text\n\nAbderhalden S, Biro P, Hechelhammer L, et al.: CT-guided navigation of percutaneous hepatic and renal radiofrequency ablation under high-frequency jet ventilation: feasibility study. J Vasc Interv Radiol. 2011; 22(9): 1275–8. PubMed Abstract | Publisher Full Text\n\nFritz P, Kraus HJ, Mühlnickel W, et al.: High-frequency jet ventilation for complete target immobilization and reduction of planning target volume in stereotactic high single-dose irradiation of stage I non-small cell lung cancer and lung metastases. 
Int J Radiat Oncol Biol Phys. 2010; 78(1): 136–42. PubMed Abstract | Publisher Full Text\n\nEngstrand J, Nilsson H, Jansson A, et al.: A multiple microwave ablation strategy in patients with initially unresectable colorectal cancer liver metastases - A safety and feasibility study of a new concept. Eur J Surg Oncol. 2014; 40(11): 1488–93. PubMed Abstract | Publisher Full Text\n\nAbderhalden S, Biro P, Hechelhammer L, et al.: CT-guided navigation of percutaneous hepatic and renal radiofrequency ablation under high-frequency jet ventilation: feasibility study. J Vasc Interv Radiol. 2011; 22(9): 1275–8. PubMed Abstract | Publisher Full Text\n\nDenys A, Lachenal Y, Duran R, et al.: Use of high-frequency jet ventilation for percutaneous tumor ablation. Cardiovasc Intervent Radiol. 2014; 37(1): 140–6. PubMed Abstract | Publisher Full Text\n\nClifford MA, Banovac F, Levy E, et al.: Assessment of hepatic motion secondary to respiration for computer assisted interventions. Comput Aided Surg. 2002; 7(5): 291–9. PubMed Abstract | Publisher Full Text\n\nMaier-Hein L, Müller SA, Pianka F, et al.: Respiratory motion compensation for CT-guided interventions in the liver. Comput Aided Surg. 2008; 13(3): 125–38. PubMed Abstract | Publisher Full Text\n\nMatney JE, Parker BC, Neck DW, et al.: Target localization accuracy in a respiratory phantom using BrainLab ExacTrac and 4DCT imaging. J Appl Clin Med Phys. 2011; 12(2): 3296. PubMed Abstract | Publisher Full Text\n\nWidmann G, Schullian P, Haidu M, et al.: Respiratory motion control for stereotactic and robotic liver interventions. Int J Med Robot. 2010; 6(3): 343–9. PubMed Abstract | Publisher Full Text\n\nStillström D, Nilsson H, Jesse M, et al.: A new technique for minimally invasive irreversible electroporation of tumors in the head and body of the pancreas. Surg Endosc. 2017; 31(4): 1982–5. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoode JS Jr, Taylor RL, Buffington CW, et al.: High-frequency jet ventilation: Utility in posterior left atrial catheter ablation. Heart Rhythm. 2006; 3(1): 13–9. PubMed Abstract | Publisher Full Text\n\nCormack JR, Hui R, Olive D, et al.: Comparison of two ventilation techniques during general anesthesia for extracorporeal shock wave lithotripsy: high-frequency jet ventilation versus spontaneous ventilation with a laryngeal mask airway. Urology. 2007; 70(1): 7–10. PubMed Abstract | Publisher Full Text\n\nGalmén K, Freedman J, Toporek G, et al.: Dataset 1 in: Clinical application of high frequency jet ventilation in stereotactic liver ablations – a methodological study. F1000Research. 2018. Data Source\n\nGalmén K, Freedman J, Toporek G, et al.: Dataset 2 in: Clinical application of high frequency jet ventilation in stereotactic liver ablations – a methodological study. F1000Research. 2018. Data Source"
}
|
[
{
"id": "36747",
"date": "06 Aug 2018",
"name": "Per Sandström",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGalmén et al have written an important paper comparing liver movement during computer-assisted liver thermal ablation using high frequency jet-ventilation (HFJV) versus conventional ventilation. The study includes 5 patients treated with open liver tumor ablations; both methods were used in each patient, showing a significant and clinically relevant reduction in liver movement with HFJV. As pointed out by the authors, liver displacement must be as small as possible when performing liver ablations, to reduce the risk of missing the lesion.\n\nA few minor comments\nIn the method section it is stated that only patients with less than 20 tumors were included, but according to Table 1 patient number 5 had 30 lesions; maybe this should be corrected.\nThe reason for treating these patients with open surgery is unclear to me and could possibly be explained in the method section.\nIn the discussion the abbreviation ENT is not explained; spelling it out may be of value for some readers.\n\nIt would be most interesting to see the same kind of study performed in the percutaneous setting.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3904",
"date": "29 Aug 2018",
"name": "Piotr Harbut",
"role": "Author Response",
"response": "We thank Professor Sandström for his comments and the opportunity to clarify some points.\n\n1. One patient turned out to have more than 20 metastases at the time of surgery, but not on the imaging on which the treatment was allocated. Rather than closing up, a new evaluation was done during surgery using intraoperative ultrasound, and it was found that ablations could be done leaving an adequate volume of functioning liver parenchyma, which is why this was the course taken.\n\n2. Open surgery was used because at this time the hardware and software used for lesion tracking were only adapted for open surgery. We now use a laparoscopic approach instead and have done so for a few years.\n\n3. ENT is of course ear, nose and throat. This should have been spelled out in the text.\n\n4. A dynamic study of liver motion with respiration could indeed be performed using fluoroscopy or, even better, by tracking an electromagnetic intravascular probe introduced as far peripherally as possible in the liver. We have not done this, but the suggestion is very good since this is an obvious problem during CT-guided interventions."
}
]
},
{
"id": "37294",
"date": "28 Aug 2018",
"name": "Janusz Trzebicki",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe application of high frequency jet ventilation (HFJV) during stereotactic thermal ablation of liver tumours is an interesting method for minimising respiration-related movements of this organ. The authors presented the results of a methodological study which includes five patients who underwent open computer-assisted liver thermal ablation under general anesthesia using HFJV. The aim of this study was to compare HFJV with conventional ventilation (CV) in terms of the influence of these ventilation techniques on respiration-related movements of the liver. The range of liver movements during both methods of ventilation was measured and compared in every patient. The applied measurement method allowed safe assessment of the dynamic movements of the organ. The results showed that HFJV, in comparison to CV, significantly reduced liver movements. The authors concluded that HFJV is a promising method of lung ventilation for patients qualified for surgery when reduction of target organ displacement is crucial.\n\nApplication of HFJV instead of CV allowed better immobilisation of the liver and therefore may allow ablation to be performed safely and more effectively. So far, only a few articles present this topic and the analysed groups of patients are limited. That is why we need more well-designed studies evaluating HFJV during ablation procedures, such as the study written by Galmen et al. In my opinion it is an important and correctly conducted study. 
The authors developed and described a new methodology for dynamic measurement of intraoperative liver movements related to lung ventilation. This study provides new arguments for using HFJV for patients undergoing stereotactic thermal ablation of tumours of the liver or other organs.\n\nMinor comments:\nOn Figure 1.\nThe term \"standard ventilation\" has two different abbreviations in the article: the SV abbreviation is used on the chart, while in the description below we find TV.\nTo include a patient in the study, the number of metastases (as stated in the method section) should be less than 20, whereas one patient (Table 1, ID-5) had 30.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3936",
"date": "31 Aug 2018",
"name": "Piotr Harbut",
"role": "Author Response",
"response": "We would like to thank Professor Trzebicki for his positive input. The suggested terminological changes in the definition of conventional lung ventilation will be made in the manuscript soon. We chose the TV abbreviation for tidal volume ventilation. The second comment, concerning the number of metastases in case 5, was the same as Professor Sandström's, and the appropriate explanation will be added in the Discussion section."
}
]
}
] | 1
|
https://f1000research.com/articles/7-773
|
https://f1000research.com/articles/7-1446/v1
|
10 Sep 18
|
{
"type": "Data Note",
"title": "Germination rates of four Chilean forest trees seeds: Quillaja saponaria, Prosopis chilensis, Vachellia caven, and Caesalpinia spinosa",
"authors": [
"Álvaro Plaza",
"Miguel Castillo",
"Miguel Castillo"
],
"abstract": "Data on the germination rates of four tree species, natively found in the Chilean Mediterranean-climate zone, were determined by germination in crop chambers. The obtained data were used to interpolate or extrapolate the time taken for 50% of seeds to germinate in each case. These results are useful for regional native forest research and, in a broad sense, for use in models to study germination dynamics in Mediterranean-climate zones.",
"keywords": [
"germination",
"native forest",
"Mediterranean-climate zone"
],
"content": "Introduction\n\nKnowledge of the germination rates of a species means that future determination of this rate is unnecessary, preventing the waste of time and seeds.\n\nQuillaja saponaria and Vachellia caven are two of the most representative trees in the Chilean Mediterranean forest (Perez-Quezada & Bown, 2015), so information about these species will be useful for ecological investigation and restoration. Prosopis chilensis is vulnerable in the wild and is a key species of its community (Valdivia & Romero, 2013); data about its propagation are important for conservation biologists.\n\nIn this article, we present the germination rates of seeds of Q. saponaria, P. chilensis, V. caven, and Caesalpinia spinosa. Dataset 1 contains the raw data from which these germination rates are calculated (Plaza & Castillo, 2018).\n\n\nMethods\n\nAll seeds were collected from adult trees. Q. saponaria seeds were collected in VIII Región, Chile; seeds from V. caven, C. spinosa and P. chilensis were from Región Metropolitana, Chile. The seeds were collected between February and April 2017. Information about collection was obtained from the seed provider, CESAF Antumapu, http://cesaf.forestaluchile.cl/.\n\nTable 1 and Table 2 show the initial number of seeds per plate and the percentage of germinated seeds on selected days. Figure 1 shows the obtained values of the time taken for 50% of seeds to germinate (TG50).\n\nInterpolation of Q. saponaria (A), P. chilensis (B) and V. caven TG50 (C), and extrapolation of C. spinosa TG50 (D).\n\nPretreatment conditions were suggested by the provider. Briefly, seeds of Q. saponaria were hydrated in tap water overnight. Seeds of P. chilensis were scarified in 95–97%, analytical grade H2SO4 for 10 minutes and then hydrated in tap water overnight. Seeds of V. caven were scarified in 95–97%, analytical grade H2SO4 for 90 minutes and then hydrated in tap water overnight. Seeds of C. 
spinosa were scarified in 95–97%, analytical grade H2SO4 for 30 minutes and then hydrated in tap water overnight.\n\nActivated seeds of Q. saponaria, P. chilensis, V. caven, and C. spinosa were placed in Petri plates over a filter paper bed (3 plates per species). The filter paper was then hydrated with distilled water. All plates were incubated in a crop chamber at 20°C, with light/dark cycles of 9 h/15 h. Germination is conditioned by temperature, so altering this factor could completely change the germination rates (Giuliani et al., 2015).\n\nPlates were monitored periodically to count the germinated seeds and replenish the distilled water. Q. saponaria and P. chilensis plates were monitored until day 19 (Table 1). After that, fungal development made it difficult to check the plates, and a tactile examination of seeds indicated that most of them were rotten.\n\nPlates containing V. caven and C. spinosa were more resistant to contamination and could be monitored until day 22. After this point, germination was too slow, and it was decided to end the experiment. Results are shown in Table 2.\n\nThe sample size, provided in the tables, is considered important for the replicability of a germination assay (Ribeiro-Oliveira & Ranal, 2016).\n\nFor Q. saponaria, P. chilensis and V. caven, the TG50 was linearly interpolated from the two closest points (Figure 1A–C). C. spinosa did not reach 50% germination during the assay, so its TG50 was extrapolated using the last five points (Figure 1D). The TG50 of Q. saponaria was 4.9 days. P. chilensis had the fastest germination (TG50 = 1.7 days); V. caven had a TG50 of approximately 3.9 days, and the TG50 of C. spinosa was estimated to be 25.8 days.\n\n\nData availability\n\nDataset 1. Raw number of germinated seeds for each species, each repeat plate and each time point. Also included are the cumulative number of germinated seeds, percentages of germinated seeds and the calculation of the TG50 for each species. 
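The linear interpolation used for TG50 (for the three species that crossed 50% within the assay) can be sketched as follows; this is a hypothetical helper written for illustration, not the authors' code:

```python
def tg50_interpolated(days, pct):
    # days: observation days; pct: cumulative germination percentages.
    # Linearly interpolate between the two closest points bracketing 50%.
    for i in range(1, len(days)):
        d0, p0 = days[i - 1], pct[i - 1]
        d1, p1 = days[i], pct[i]
        if p0 <= 50.0 <= p1:
            return d0 + (50.0 - p0) * (d1 - d0) / (p1 - p0)
    return None  # 50% never reached within the assay; extrapolation needed
```

For instance, a species at 40% germination on day 2 and 60% on day 3 would give a TG50 of 2.5 days, while a species such as C. spinosa that never crosses 50% would return None, signalling that extrapolation is required instead.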
DOI: https://doi.org/10.5256/f1000research.16091.d216429 (Plaza & Castillo, 2018).",
"appendix": "Grant information\n\nThis work was supported by CONAF project 008/2016 \"Pautas de terreno para la restauración de formaciones esclerófilas afectadas por incendios forestales. Regiones V, Metropolitana, VI y VII\", and CONICYT-PCHA/MagísterNacional/2016 – 22161077.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nGiuliani C, Lazzaro L, Mariotti Lippi M, et al.: Temperature-related effects on the germination capacity of black locust (Robinia pseudoacacia L., Fabaceae) seeds. Folia Geobot. 2015; 50(3): 275–282. Publisher Full Text\n\nPerez-Quezada J, Bown H: Guía para la restauración de los ecosistemas andinos de Santiago. Universidad de Chile-CONAF, Santiago, 2015. Publisher Full Text\n\nPlaza Á, Castillo M: Dataset 1 in: Germination rates of four Chilean forest trees seeds: Quillaja saponaria, Prosopis chilensis, Vachellia caven, and Caesalpinia spinosa. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16091.d216429\n\nRibeiro-Oliveira J, Ranal MA: Sample size in studies on the germination process. Botany. 2016; 94(2): 103–115. Publisher Full Text\n\nValdivia C, Romero C: En la senda de la extinción: el caso del algarrobo Prosopis chilensis (Fabaceae) y el bosque espinoso en la Región Metropolitana de Chile central. Gayana Bot. 2013; 70(1): 57–65. Publisher Full Text"
}
|
[
{
"id": "39829",
"date": "08 Nov 2018",
"name": "Gabriela Saldías",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe document provides valuable information on the germination rate of four native species. Quillaja saponaria and Vachellia caven are two of the most representative trees of the Chilean Mediterranean forest, Prosopis chilensis is in a threatened category, and Caesalpinia spinosa adapts well to the conditions of the central zone of the country. The protocols for the collection of fruits and seeds, as well as the applied pre-germinative treatments, were based on methodologies recommended by the Centro de Semillas de la Universidad de Chile, CESAF Antumapu. Although they are described in the text, it would be convenient to add the references of Gold et al. (20041) and INFOR (20152), which complement the background on pre-germination treatments for the species under study.\nWith regard to the results obtained, knowing the time required to achieve 50% of seed germination is a useful fact that helps to plan plant propagation work for research and ecological restoration purposes, as proposed by the authors.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
},
{
"id": "39828",
"date": "23 Nov 2018",
"name": "Madelaine Quiroz Espinoza",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors conducted germination trials on seeds of four Chilean forest tree species. They indicate that this information could be useful for conservation and restoration studies. Regarding the methodology, the description of pretreatments and germination trials stands out. The study design is appropriate and the work is technically sound. I believe that the authors could increase the number of replicates (Petri plates), but in this case it is correct as a first approximation to determine the germination rates of Chilean forest tree seeds.\nThe manuscript represents a useful contribution to the theme of the germination of Chilean forest tree seeds and deserves to be indexed.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
},
{
"id": "40998",
"date": "11 Dec 2018",
"name": "Diana Soriano",
"expertise": [
"Reviewer Expertise Plant Eco physiology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAuthors present germination data for four species from the Chilean Mediterranean forest. I would like to focus on the first statement of the introduction:\n\n“Knowledge of the germination rates of a species means that future determination of this rate is unnecessary, preventing the waste of time and seeds”.\nI think this statement is not accurate. Germination rate, especially in wild species, could be different depending on the cohort of seeds, the years of collection and the location. Data showed in this paper are valuable as a single biological replicate of germination behavior of the species used in the studies but it is necessary to add more biological replicates (different years and locations) to have a better understanding of germination behavior of the studied species.\nMethods:\nI would like to know from how many trees seeds were collected.\n\nTG50 calculation could be more easily reproduced if the authors fit their data to a model (p/e sigmoid) and calculated first maximum derivate.\n\nIs the rationale for creating the dataset(s) clearly described? Partly\n\nAre the protocols appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and materials provided to allow replication by others? Partly\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
},
{
"id": "40992",
"date": "18 Dec 2018",
"name": "Rafael Rubio de Cases",
"expertise": [
"Reviewer Expertise Plant evolutionary ecology"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a small study, containing very little data. Sample sizes of only three plates (n=3) per species are insufficient to draw conclusions. With such a small sample size, all one could hope would be to draw conclusions at the population level, but that is all but impossible since there is almost no information on the populations of origin, and designations such as “VIII Región, Chile\" or \"Región Metropolitana, Chile” are unintelligible for anyone not familiar with Chilean geography.\nAlso, some explanation of the pre-treatments would be useful. Why does it need pre-treatment? Why that pre-treatment specifically? Do the authors have data from un-treated seeds?\nChemical scarification is expected to have influenced GT50. Specifically, to shorten it, since scarified seeds germinate faster. Therefore, it is not clear how meaningful these figures are. Moreover, GT50 calculations are a little obscure as presented. Were unviable seeds taken into account? There was a sizeable amount of seeds lost to fungi. Where those discarded? Please specify.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and materials provided to allow replication by others? Partly\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1446
|
https://f1000research.com/articles/7-1023/v1
|
09 Jul 18
|
{
"type": "Research Note",
"title": "Analysis of the complete genome of hepatitis B virus subgenotype C2 isolate NHB17965 from a patient with uncomplicated chronicity",
"authors": [
"Modhusudon Shaha",
"Palash Kumar Sarker",
"Md. Saddam Hossain",
"Keshob Chandra Das",
"Munira Jahan",
"Shuvra Kanti Dey",
"Shahina Tabassum",
"Abu Hashem",
"Md. Salimullah",
"Modhusudon Shaha",
"Palash Kumar Sarker",
"Md. Saddam Hossain",
"Keshob Chandra Das",
"Munira Jahan",
"Shuvra Kanti Dey",
"Shahina Tabassum",
"Abu Hashem"
],
"abstract": "The number of chronic cases of hepatitis B virus (HBV) is increasing rapidly in the world. Herein, we report a complete genome of HBV subgenotype C2 (HBV/C2) with current common amino acid substitutions from a patient with chronic HBV without liver complications. Complete genome analysis revealed that the isolated strain was a non-recombinant wild type and had several regular substitutions in the reverse transcriptase domain and small surface proteins of HBV. The isolated complete sequence could be considered as a chronic reference strain of HBV/C2 in Bangladesh. This study may help clinicians and scientists gain in-depth knowledge on common substitutions of HBV/C2 genome and to identify potential therapies against chronic HBV infections.",
"keywords": [
"HBV/C2",
"Chronic",
"Non-recombinant",
"Bangladesh"
],
"content": "Introduction\n\nThe number of cases of chronic liver disease caused by hepatitis B virus (HBV) are increasing markedly1. Globally, more than 2 billion people have been infected by HBV2 and, according to the World Health Organization (WHO) approximately 257 million were living with HBV in 2017. In Bangladesh, the rate of HBV chronicity is 2–6%3, which makes it relatively higher risk than other infectious diseases.\n\nHBV genome comprises a partially double-stranded covalently closed circular DNA that encodes four highly overlapping major open reading frames4. Due to the absence of proof-reading activity, the mutation rate of HBV is high5; hence, recombinant strains are evolving with a common pattern. Most of the HBV cases are chronic, which has a high possibility of causing liver cirrhosis6 and hepatocellular carcinoma7. In Bangladesh, there are no reported reference complete sequence of HBV chronic strain of subgenotype C2. Hence, we isolated the complete genome of a HBV/C2 strain collected from a patient without liver complications, though carrying the virus for a long time.\n\n\nMethods\n\nAn HBV-positive plasma sample was collected from a 45-year-old male patient in a tertiary hospital in Dhaka, Bangladesh after obtaining the patient’s written informed consent. The infected patient had chronic liver disease, as determined by ultrasonography. The patient was diagnosed with chronic HBV infection recently, with a high viral load in the plasma. However, the patient was not showing signs of jaundice, though was affected by fever, nausea, vomiting and fatigue. The study was approved by the Research Ethics Committee of National Institute of Biotechnology, Bangladesh (NIBREC2015-01). The patient was not taking any antiviral therapy and was diagnosed 1 month prior to obtainment of the plasma sample. HBV DNA was extracted from the sample using the QIAamp MinElute Virus Spin kit (Qiagen, Germany). 
The complete HBV genome was amplified by six sets of primer pairs used previously in another study8 using a conventional PCR method. The primer sequences and their annealing temperatures were as follows: set 1, forward- AAGCTCTGCTAGATCCCAGAGT, reverse- AGTTGGCGAGAAAGTGAAAGCCTG, 56°C; set 2, forward- CCTATTGATTGGAAAGTATGTCA, reverse- AACAGACCAATTTATGCCTA, 48°C; set 3, forward- GAGACCACCGTGAACGCCCA, reverse- CCTGAGTGCTGTATGGTGAGG, 56°C; set 4, forward- TTCACCTCTGCCTAATCATC, reverse- ATAGGGGCATTTGGTGGTCT, 52°C; set 5, forward- TCAGGCAACTATTGTGGTTTCA, reverse- GGGTTGAAGTCCCAATCTGGATT, 51°C; set 6, forward- GGGTCACCATATTCTTGGGAA, reverse- CGAGTCTAGACTCTGTGGTA, 51°C. For a mixture of 25 µl reaction volume, 12.5 µl of 2X MasterMix (Thermo Fisher Scientific, USA), 1 µl each of forward and reverse primers (IDT, USA), 9.5 µl of nuclease-free water (Thermo Fisher Scientific, USA) and 2 µl of template DNA were used. The condition of the PCR reaction was 1 cycle at 95°C for 10 min, 35 cycles at 95°C for 1 min, with the aforementioned annealing temperatures for 1 min and 72°C for 1 min, and a final cycle for 10 min at 72°C. Sanger sequencing was performed using the BigDye Terminator version 3.1 cycling sequencing kit (Applied Biosystems, USA) by ABI 3130 Genetic Analyser (SeqGen, CA, USA) and by thermal cycler (Sigma-Aldrich, Germany) using the described annealing temperatures as per manufacturer’s instructions after the purification of PCR products using PureLink PCR Purification Kit (Thermo Fisher Scientific, USA), performed in accordance with the manufacturer’s protocol. Next, the sequenced contigs were assembled using the Seqman tool of DNASTAR Lasergene version 7.29.\n\nThe subgenotyping and mutation analysis of the sequenced genome were performed using the HBV Geno2Pheno tool version 2 using the default parameters, comparing against the HBV genotype D consensus sequence. Recombination analysis of the sequence was performed using the NCBI genotyping tool. 
The complete genome was deposited in the GenBank under the accession number MH220971.\n\n\nResults and discussion\n\nAnalysis of the complete genome denotes that the isolate studied here, termed NHB17965, comprises HBV genotype C and subgenotype C2 (HBV/C2) with a GC content of 48.77%. Recombination analysis using the NCBI Genotyping tool showed that NHB17965 is a non-recombinant wild-type HBV isolate (Figure 1).\n\nThe Simplot diagram was generated using the NCBI Genotyping tool.\n\nIsolate NHB17965 was observed to have amino acid substitutions H9Y, N13H, I91L, P109S, T128N, I269L and V278I in the polymerase domain and S53L, P120T, I126T and S210N in the small hepatitis B surface protein as analysed by HBV Geno2Pheno tool, compared against the HBV genotype D consensus sequence. These substitutions may be the results of regular genomic changes to HBV because of a lack of proof-reading activity of the viral reverse transcriptase, and may not signify any danger. Hence, the isolate NHB17965 could be considered as a reference strain of chronic HBV/C2 infection in Bangladesh.\n\nThe findings of this study will help clinicians and scientists to gain substantial knowledge about the current common genomic substitutions of HBV/C2 and to develop treatments against chronic HBV infections.\n\n\nData availability\n\nGenome of the HBV strain isolated in this study, MH220971.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe study was supported by the National Institute of Biotechnology, Ministry of Science and Technology, Bangladesh.\n\n\nReferences\n\nMacLachlan JH, Cowie BC: Hepatitis B virus epidemiology. Cold Spring Harb Perspect Med. 2015; 5(5): a021410. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShaha M, Hoque SA, Rahman SR: Molecular epidemiology of hepatitis B virus isolated from Bangladesh. SpringerPlus. 2016; 5(1): 1513. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMunshi SU, Tran TTT, Vo TNT, et al.: Molecular characterization of hepatitis B virus in Bangladesh reveals a highly recombinant population. PLoS One. 2017; 12(12): e0188944. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShaha M, Das KC, Hossain MS, et al.: Complete Genome Sequence of a Circulating Hepatitis B Virus Genotype C Strain Isolated from a Chronically Infected Patient Identified at an Outdoor Hospital in Bangladesh. Genome Announc. 2018; 6(9): pii: e01601-17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaligiuri P, Cerruti R, Icardi G, et al.: Overview of hepatitis B virus mutations and their implications in the management of infection. World J Gastroenterol. 2016; 22(1): 145–154. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLin J, Wu JF, Zhang Q, et al.: Virus-related liver cirrhosis: molecular basis and therapeutic options. World J Gastroenterol. 2014; 20(21): 6457–6469. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDi Bisceglie AM: Hepatitis B and hepatocellular carcinoma. Hepatology. 2009; 49(5 Suppl): S56–S60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSugauchi F, Mizokami M, Orito E, et al.: A novel variant genotype C of hepatitis B virus identified in isolates from Australian Aborigines: complete genome sequence and phylogenetic relatedness. J Gen Virol. 2001; 82(Pt 4): 883–892. 
PubMed Abstract | Publisher Full Text\n\nBurland TG: DNASTAR's Lasergene sequence analysis software. Methods Mol Biol. 2000; 132: 71–91. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "35849",
"date": "23 Jul 2018",
"name": "Mohammad Ariful Islam",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe overall level of the paper is good: even if it is quite simple, it is well written and some important considerations are highlighted.\nThe manuscript talks about the complete genome analysis of hepatitis B virus and represent the complete genome sequence of HBV subgenotype C2 in Bangladesh. The study is a short research note. Although the findings of the study is limited and like a genome announcement, it signifies to be documented. The study is scientifically acceptable. I have some minor queries\nwhat does the isolate NHB17965 indicates? any reference sequences used in this study? If used, what are these?”\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3864",
"date": "01 Aug 2018",
"name": "Modhusudon Shaha",
"role": "Author Response",
"response": "We would like to thank the reviewer for his constructive comments on the manuscript.NHB17965 indicates the sample identification number given by the laboratory. There are no reference sequences used in this study for the analysis. The isolated sequence was analyzed using NCBI Genotyping tool and Geno2Pheno tools as given in the manuscript."
}
]
},
{
"id": "36692",
"date": "07 Aug 2018",
"name": "Paul Klapper",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper reports a sequence analysis of a strain of hepatitis B virus. However, there are some aspects of the paper that merit attention. In the Abstract and again in the Introduction the authors state: The number of chronic cases of hepatitis B virus (HBV) is increasing rapidly in the world\". I found this an interesting statement and wondered what was the evidence base for this. In the Introduction the authors cite MacLachlan and Cowie (2015). It appears to me that the authors are mis-quoting this reference. The reference actually says \"The burden of chronic HBV infection is increasingly being recognized\", this is substantially different from suggesting that the number of cases is increasing. There is a general lack of comprehensive epidemiological information on chronic hepatitis B infection as many developing countries (the epicentre of chronic HBV infections) lack surveillance to provide data. We do not know how WHO programmes to prevent vertical transmission of HBV are impacting on chronic hepatitis B and I believe we lack information to support the assertion of a global increase in numbers.\nAlso in the abstract - the third line \"\"with current common amino acid substitutions\" is not a meaninglful statement. 
The sentence needs rewriting to make clear what the authors actually mean.\nIntroduction 1st paragraph last 2 lines: \"which makes it relatively higher risk than other infectious diseases\" ; it is unclear what is meant here, a rate of 2-6% is clearly low compared with risk of, for example, influenza or rotavirus infection.\nIntroduction, 2nd Paragraph \"the mutation rate of hepatitis B is high\" and \" hence, recombinant strains are evolving with a common pattern\". Mutation is a random event; how can a random event lead to a commonly evolving pattern? The authors need to recast the sentence to explain what they really mean.\nIntroduction, end of 2nd paragraph. The genome was isolated from \"a patient without liver complication\" yet in methods we are told the patient has chronic liver disease as adjudged by ultrasonography.\nMethods: line 4. Was formal staging not used to describe liver disease e.g. relating shear wave elastography to fibrosis score? Were liver function tests performed?\nMethods, line 5 \"The patient was diagnosed with chronic hepatitis B recently\". This seems strange, the standard definition of chronic hepatitis B infection is the detection of hepatitis B surface antigen in serum for more than 6 months. This patient - with blood taken only one month post diagnosis - would not seem to meet the definition. No hepatitis B marker results are given for the patient and so understanding of the phase of chronic illness (see EASL guidance; Journal of Hepatology 2017 vol. 67: 370–398) is not possible.\nAnalysis - reference needed for the HBV Geno2Pheno software used.\nWhat evidence is there that the HBV/C2 isolate sequenced is a common phenotype in chronic hepatitis B virus infection in Bangladesh? Without such evidence it is difficult to see how the conclusions \"The findings of this study will help....\" are justified. 
This could simply be a single instance of this virus produced through random mutation. It is therefore also difficult to understand how this could be considered a 'reference strain' for chronic hepatitis B as it may represent a single instance and, further, as the patient does not appear to meet a case definition of having chronic hepatitis B infection, can it be considered as a reference strain for chronic hepatitis B infection?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3885",
"date": "13 Aug 2018",
"name": "Modhusudon Shaha",
"role": "Author Response",
"response": "We would like to thank the reviewer for his constructive comments on the manuscript. Herein, the responses to the comments are given. ‘The number of chronic cases of hepatitis B virus (HBV) is increasing rapidly in the world’- The sentence is corrected in the revised manuscript. “with current common amino acid substitutions”- this portion of the sentence is removed from the revised manuscript. \"which makes it relatively higher risk than other infectious diseases\"- the sentence is re-written with substantial clarity. \"the mutation rate of hepatitis B is high\" and \" hence, recombinant strains are evolving with a common pattern\"- the sentence is re-written in the revised manuscript. \"a patient without liver complication\"- the sentence is corrected in the revised manuscript. Particular liver function tests were not performed. However, the liver was observed normal using ultrasonography. \"The patient was diagnosed with chronic hepatitis B recently\"- The sentence is edited in the revised manuscript. The detail of diagnosis of the chronicity is described in the revised manuscript. Reference is given that study used the HBV Geno2Pheno tool. \"The findings of this study will help....\"- the sentence is re-written in the revised manuscript."
}
]
}
] | 1
|
https://f1000research.com/articles/7-1023
|
https://f1000research.com/articles/7-511/v1
|
27 Apr 18
|
{
"type": "Research Note",
"title": "Sleep and BMI: Do (Fitbit) bands aid?",
"authors": [
"Laura McDonald",
"Faisal Mehmud",
"Sreeram V. Ramagopalan",
"Laura McDonald",
"Faisal Mehmud"
],
"abstract": "Recent studies have used mainstream consumer devices (Fitbit) to assess sleep objectively and test the well documented association between sleep and body mass index (BMI). In order to further investigate the applicability of Fitbit data for biomedical research across the globe, we analysed openly available Fitbit data from a largely Chinese population. We found that after adjusting for age, gender, race, and average number of steps taken per day, average hours of sleep per day was negatively associated with BMI (p=0.02), further demonstrating the significant potential for wearables in international scientific research.",
"keywords": [
"sleep",
"BMI",
"fitbit",
"wearable"
],
"content": "Introduction\n\nThe association between sleep and body mass index (BMI) is well known1. Recently Xu and colleagues2 showed that shorter sleep duration, as measured by a Fitbit wristband, was associated with a higher average BMI2. These results importantly show the potential value of mainstream consumer devices for scientific research by providing objective sleep and physical activity data. A limitation of the Xu et al. study however, as noted by the authors2, is the lack of diversity of ethnicity in their study population, with the majority of participants being of European descent. In order to assess the utility of wearables for global research we used data from a recently published study3 to investigate the relationship between sleep and BMI in a largely Chinese population.\n\n\nMethods\n\nData was obtained from the study by Lim and colleagues3. In brief, this study generated Fitbit Charge heart rate (HR) data from a cohort of volunteers tracked for a median duration of 4 days3. The volunteers underwent comprehensive profiling including activity tracking (step count and sleep tracking) using the Fitbit Charge HR wearable sensor and BMI measurement at day of recruitment. 
From the total cohort of 233 individuals contributing data3, association analyses were conducted on subjects who had valid measurements for all metric types and who had more than one day of sleep data.\n\nTo test the association between average hours of sleep and BMI, multiple linear regression analyses were conducted using the ‘statsmodels’ package in Python.\n\n\nResults\n\nUseable data was available for 212 individuals; the summary of their clinical and demographic characteristics is shown in Table 1.\n\nBMI: Body mass index\n\nA linear regression analysis showed that after adjusting for age, gender, race, and average number of steps taken per day, average hours of sleep per day was negatively associated with BMI (p=0.02): an hour increase in sleep per day was associated with approximately a 0.5 point decrease in BMI (Table 2, Figure 1).\n\n\nConclusions\n\nIn summary, we found that the findings of Xu and colleagues are consistent in a population of different ancestry. More generally, previous work2,3 and that described here demonstrates the significant potential for wearables in global biomedical research and further, as we used openly available data, this analysis shows the benefits of sharing observational data4.\n\n\nData availability\n\nAll data used in this study is available from the article by Lim et al. https://doi.org/10.1371/journal.pbio.20042853",
"appendix": "Competing interests\n\n\n\nLM, FM and SR are employees of Bristol-Myers Squibb Company.\n\n\nGrant information\n\nBristol-Myers Squibb supported this work.\n\n\nReferences\n\nCappuccio FP, Taggart FM, Kandala NB, et al.: Meta-analysis of short sleep duration and obesity in children and adults. Sleep. 2008; 31(5): 619–626. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXu X, Conomos MP, Manor O, et al.: Habitual sleep duration and sleep duration variation are independently associated with body mass index. Int J Obes (Lond). 2017. PubMed Abstract | Publisher Full Text\n\nLim WK, Davila S, Teo JX, et al.: Beyond fitness tracking: The use of consumer-grade wearable data from normal volunteers in cardiovascular and lipidomics research. PLoS Biol. 2018; 16(2): e2004285. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcDonald L, Schultze A, Simpson A, et al.: Lack of data sharing in observational studies. BMJ. 2017; 359: j4866. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "35150",
"date": "16 Aug 2018",
"name": "Eva Corpeleijn",
"expertise": [
"Reviewer Expertise Lifestyle epidemiology",
"lifestyle interventions to prevent diabetes type 2",
"wearable technology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper confirms a weak inverse association between sleep time and BMI using a mainstream consumer activity tracker (fitbit). The aim is to demonstrate the potential for wearables in scientific research.\nBecause of this aim, it would be helpful to get additional information about feasibility aspects: what are the prerequisitions for usability in terms of data collection, how many of the participants had useful data based on which criteria, what strategies are needed for quality control to obtain meaningful associations?\nDefinitions for 'useable data' should therefore be clarified.\nSensitivity analyses can provide answers to what elements are important and what factors are secondary for a meaningful data use of consumer trackers.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "37363",
"date": "03 Sep 2018",
"name": "Maria R. Bonsignore",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI agree with the previous reviewer about methodological remarks. While wearable devices may help in collecting data and significantly contribute to generate hypotheses or confirm results, their reliability has not been rigorously tested. An advantage of wearable devices is the possibility to collect large amount of data, which is not the case with this paper (n=212). Nevertheless, this work points to the possibility of increasingly available \"big data\", especially after appropriate validation studies.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-511
|
https://f1000research.com/articles/7-1428/v1
|
07 Sep 18
|
{
"type": "Research Article",
"title": "Reduced-dose computed tomography to detect dorsal screw protrusion after distal radius volar plating",
"authors": [
"Kevin J. Leffers",
"John W. Kosty",
"Glenn M. Garcia",
"Daniel C Jupiter",
"Ronald W. Lindsey",
"Zbigniew Gugala",
"Kevin J. Leffers",
"John W. Kosty",
"Glenn M. Garcia",
"Daniel C Jupiter",
"Ronald W. Lindsey"
],
"abstract": "Background: Tenosynovitis and tendon rupture caused by screw penetration of the dorsal cortex are common complications after fixed-angle volar plating of a distal radius fracture. Detecting screw prominence with plain radiography is difficult due to the topography of the distal radius dorsal cortex. Computed tomography (CT) offers more detailed imaging of the bone topography, but is associated with radiation exposure. The present cadaveric study compared reduced-dose and standard-dose CT protocols in the detection of dorsal screw protrusion after fixed-angle volar plating of distal radius fracture. If found equivalent, a reduced-dose protocol could decrease the total radiation exposure to patients. Methods: Standard size distal radius volar locking plates were placed using a standard Henry approach in 3 matched pairs of cadaver wrists. A total of 3 distal locking screws were placed at 3 different lengths for a total of 3 rounds of CT scans per wrist pair. Each wrist pair was imaged by CT using standard-dose and reduced-dose protocols. Dorsal screw penetration was measured in each imaging protocol by 3 radiologists at two time periods to calculate inter- and intra-observer variability. Variability was calculated using the concordance correlation coefficient (CCC), intra-class correlation coefficient (ICC), and Pearson correlation coefficient (PCC). Bland-Altman plots were used and assessed 95% limits of agreement. Results: Intra- and inter-observer variabilities, either with the reduced-dose or standard-dose protocol, were >0.85. Pairwise CCC, ICC, and PCC were >0.91. In the comparison of reduced dose versus standard dose between radiologists, correlations were always >0.95. Conclusions: Comparison of a reduced-dose CT protocol and a standard-dose CT protocol for the detection of dorsal penetrating screws after fixed-angle volar plating showed >0.95 correlation in this cadaveric model. 
A reduced-dose CT protocol is equivalent to a standard-dose CT protocol for orthopedic imaging and should reduce radiation exposure.",
"keywords": [
"reduced-dose computed tomography",
"distal radius fracture",
"radiation exposure",
"volar plating"
],
"content": "Introduction\n\nDistal radius fractures are the most prevalent bony injury in the upper extremity, accounting for 17.5% of all fractures encountered by orthopedic trauma surgeons1. Fixed-angle volar plating is the surgical method most frequently used for the internal fixation of these fractures. However, tenosynovitis due to extensor tendon irritation and tendon rupture are common complications of volar plate fixation when the posteriorly directed screws protrude through the dorsal cortex. Among 114 patients followed up for at least 1 year, prominent dorsal screw tips accounted for over half of the complications associated with volar plate fixation of unstable distal radius fractures2.\n\nAlthough most surgeons routinely use intraoperative radiography to assess the adequacy of volar plate and screw placement, accurately determining the presence of dorsal cortex screw protrusion by plain radiography is extremely difficult because of the triangular shape of the distal radius. In the assessment of dorsal screw protrusion in cadaveric distal radii by true lateral radiographs, the sensitivity varied between 56–75% among hand surgeons depending on their years of experience3. In another cadaveric study, Maschke et al. assessed the sensitivity of oblique pronation and supination imaging views and found that although these angled images were more sensitive than the true lateral view, 2–3 mm of dorsal screw protrusion could still go undetected4.\n\nComputed tomography (CT) is frequently used to evaluate the extent of volar-plate dorsal screw protrusion in distal radius fractures in symptomatic patients, and has proven to be more sensitive than plain radiography in this application5. 
However, concerns exist because CT requires a significant increase in radiation exposure and all of its associated risks6.\n\nRecently, several studies have suggested that CT at reduced doses has merit in the accurate assessment of a variety of non-orthopedic and orthopedic medical conditions7–11. However, the efficacy of reduced-dose CT for the accurate imaging of dorsal screw protrusion in the distal radius has not been determined, and gauging it was the objective of this study. Our hypothesis was that the accuracy of distal radius dorsal screw protrusion detection would not differ between reduced-dose and standard-dose CT protocols.\n\n\nMethods\n\nThree matched pairs of fresh-frozen cadaver wrists (United Tissue Network; Norman, OK, USA) were grossly screened before and during dissection to exclude the presence of prior pathology, trauma, and/or deformity. The cadaver work was performed in accordance with UTMB policies and regulation regarding procuring and handling cadavers (UTMB Notification of Use 04272015). In each wrist, a modified Henry approach was performed to access the volar surface of the distal radius, where a standard 3-hole distal radius volar locking plate (Biomet, Warsaw, IN, USA) was applied just proximal to the watershed area and centered on the radial shaft. Proximally, a volar plate diaphyseal screw was placed in routine fashion to secure this standard longitudinal position, which remained constant throughout the study. The distal locking holes were drilled using the locking drill guide and measured with a depth gauge. The closest length of screw (15mm) that would be short of this measurement and not penetrate the dorsal cortex was used. A total of 3 short locking screws were placed: radial, middle, and ulnar. Thereafter, the wrist was pronated to permit a surgical exposure incision at the dorsal distal radius, allowing direct visualization of each screw hole. 
After the absence of screw protrusions was documented, the incision was closed with Vicryl® suture (Ethicon; Bridgewater, NJ, USA).\n\nEach wrist pair was imaged 3 times using a CT scanner (SOMATOM® Definition Flash; Siemens Healthcare, Erlangen, Germany) and following a reduced-dose protocol and a standard-dose protocol. The first evaluation followed the placement of the non-penetrating screws and the suturing of the incision. After our musculoskeletal radiologist confirmed the quality of the images, the short distal screws were exchanged for screws 2.0 mm or longer to breach the dorsal cortex. The sutured dorsal incision was opened and the extent of distal screw dorsal penetration was measured with a ruler and recorded (Figure 1). The skin was re-approximated and the specimens were subsequently imaged again using the standard-dose and reduced-dose protocols. Thereafter, these longer screws were exchanged for screws 2.0 mm longer, and all specimens were subjected to a third evaluation by CT.\n\nThe standard-dose protocol utilized a fixed-tube current of 120 mA and a voltage of 120 kV. For the reduced-dose protocol, Siemens’s Combined Applications to Reduce Exposure (X-CARE) software was employed. This dose-reduction software automatically modulates the tube current according to the specimen’s anatomy and position during the CT scan. The adjusted mA values for the reduced-dose protocol ranged from 69–115 mA (Table 1); the overall average was 98 mA with a standard deviation of 13. As in the standard-dose protocol, the voltage was a constant 120 kV.\n\nFollowing CT imaging, the extent of dorsal screw penetration was measured in all 3 screw groups by 3 radiologists (senior radiologist [GMG], senior radiology resident, and junior radiology resident) at 2 time points to permit the assessment of inter- and intra-observer variabilities. 
The radiologists measured the maximal cortical extrusion of each screw from the level of the cortical breach to the screw tip (Figure 2) by utilizing the ruler caliper of the OsiriX DICOM imaging software v.6.5.2 (Apple Computers, Cupertino, CA, USA).\n\nInitial data analysis consisted of verifying the radiologists’ assessment repeatability. The first and second assessments for each observer were compared for all data using the concordance correlation coefficient (CCC), intra-class correlation coefficient (ICC), and Pearson correlation coefficient, as well as Bland-Altman plots. After repeatability was established, the average of each paired set of measurements was determined, and that value was used for all subsequent analyses. Similar analyses were done to examine the inter-observer agreement of the reduced-dose CT reads. Finally, the previous analyses were repeated to compare the reduced-dose and standard-dose protocol readings of each radiologist. All statistical analyses were performed using R statistical software package (version 3.5.1; The R Foundation for Statistical Computing; Vienna, Austria).\n\n\nResults\n\nWhen all measurements were examined for either reduced dose or standard dose, the CCC, ICC, and Pearson correlation were all >0.96 (0.96–0.99) for raters 1 and 2. The correlations for rater 3 ranged from 0.86–0.96. The limits of agreement for the first 2 radiologists were 0.55–0.71. The limits were wider, 1.09–1.45, for the third radiologist.\n\nThe inter-observer agreement patterns were similar to those of repeatability. Three-way ICC ranged from 0.93–0.96. Pairwise ICC, CCC, and Pearson correlation were high, >0.91. Similar to reliability, the limits of agreement ranged from 0.72–1.27.\n\nIn comparing the reduced-dose and standard-dose protocol readings within radiologists, correlations were very high, always >0.99. 
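The agreement statistics used throughout this analysis (concordance correlation, Pearson correlation, and Bland-Altman limits of agreement between a radiologist's two reads) can be sketched numerically. The snippet below is illustrative only: the authors' analysis was done in R, and the paired screw-protrusion measurements here are invented.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired reads."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical repeated screw-protrusion reads (mm) by one radiologist.
first = np.array([2.1, 4.0, 6.2, 2.0, 4.1, 5.9])
second = np.array([2.0, 4.2, 6.0, 2.2, 4.0, 6.1])

ccc = lins_ccc(first, second)
pearson = np.corrcoef(first, second)[0, 1]

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD.
diff = first - second
limits = (diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std())
```

Note that CCC penalizes both poor correlation and systematic bias between the two reads, so it can never exceed the Pearson correlation; identical reads would give CCC = 1.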
The limits of agreement ranged from 0.44–0.56 (Figure 3–Figure 5).\n\n\nDiscussion\n\nAn accurate assessment of distal screw placement during volar-plate fracture fixation can be clinically challenging, both intra- and postoperatively, with or without conventional radiography3,4. This study demonstrated that a reduced-dose CT protocol is equivalent to a standard-dose CT protocol, with correlations >0.99, in the detection of dorsal screw protrusion after fixed-angle volar plating of distal radius fracture. In intra- and inter-observer variability, the radiologists’ assessments demonstrated good agreement throughout the study. Moreover, the reduced-dose CT protocol was able to maintain a current below the standard 120 mA, with an average in-scan value as low as 80 mA (Table 1)—a 33% reduction that could potentially significantly decrease a patient’s overall radiation exposure. Any technique that can consistently decrease radiation exposure without compromising its diagnostic utility should be viewed favorably.\n\nThe clinical relevance of this study is considerable. As noted above, volar-plate fixation of distal radius fractures is a common surgical procedure and the topography of the dorsal cortex makes the plain radiography detection of screw prominence difficult. Prominent screws, if undetected, pose a great risk for postoperative morbidity that can include tendon irritation, tendon rupture, and/or the need for additional surgery. Although conventional CT detection of dorsally prominent screws certainly provides greater sensitivity, the elevated radiation exposure associated with the approach is a major concern. The dose-reduction software employed in this study is used for patients, so it is applicable to clinical practice.\n\nThe limitations of this study include the variability of cadaveric specimens and the variability of the reduced-dose radiation utilized. A cadaveric specimen may not fully reflect all of the issues associated with soft tissues in vivo. 
For example, the variations in bone, periosteum, and other soft tissues surrounding the dorsal cortex may affect the accuracy of the screw-tip assessment. Additionally, there were no fracture fragments or callus or other soft tissue reactions typically associated with distal radius fractures. However, since both the reduced-dose and standard-dose protocols were applied in cadaveric specimens, we anticipate that their equivalence would be maintained in vivo. Studies with living patients are needed to confirm this study’s findings.\n\nWe recommend that, when dorsal screw penetration is a concern, clinicians consider a reduced-dose CT protocol to assess the dorsal cortex in patients with clinical presentations that warrant enhanced imaging.\n\n\nData availability\n\nDataset 1: Computed tomography (CT) reading results and anatomic measurement 10.5256/f1000research.15056.d21389312\n\nContent and images used in this paper have previously been published by the authors as part of a poster for the Orthopaedic Research Society annual meeting, 2016 (poster available here).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors thank Jorge A. Lee Diaz, MD and Matthew G. Ditzler, MD, of the Department of Radiology; Stephen Dryden, BS, of the School of Medicine; and Randal P. Morris, BS, of the Department of Orthopaedic Surgery and Rehabilitation, all at the University of Texas Medical Branch, for their invaluable assistance with this research.\n\n\nReferences\n\nCourt-Brown CM, Caesar B: Epidemiology of adult fractures: A review. Injury. 2006; 37(8): 691–697. PubMed Abstract | Publisher Full Text\n\nArora R, Lutz M, Hennerbichler A, et al.: Complications following internal fixation of unstable distal radius fracture with a palmar locking-plate. J Orthop Trauma. 2007; 21(5): 316–322. PubMed Abstract | Publisher Full Text\n\nThomas AD, Greenberg JA: Use of fluoroscopy in determining screw overshoot in the dorsal distal radius: a cadaveric study. J Hand Surg Am. 2009; 34(2): 258–261. PubMed Abstract | Publisher Full Text\n\nMaschke SD, Evans PJ, Schub D, et al.: Radiographic evaluation of dorsal screw penetration after volar fixed-angle plating of the distal radius: a cadaveric study. Hand (N Y). 2007; 2(3): 144–150. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTakemoto RC, Gage M, Rybak L, et al.: Accuracy of detecting screw penetration of the radiocarpal joint following volar plating using plain radiographs versus computed tomography. Am J Orthop (Belle Mead NJ). 2012; 41(8): 358–361. PubMed Abstract\n\nGriffey RT, Sodickson A: Cumulative radiation exposure and cancer risk estimates in emergency department patients undergoing repeat or multiple CT. AJR Am J Roentgenol. 2009; 192(4): 887–892. 
PubMed Abstract | Publisher Full Text\n\nRassweiler MC, Banckwitz R, Koehler C, et al.: New developed urological protocols for the Uro Dyna-CT reduce radiation exposure of endourological patients below the levels of the low dose standard CT scans. World J Urol. 2014; 32(5): 1213–1218. PubMed Abstract | Publisher Full Text\n\nHoxworth JM, Lal D, Fletcher GP, et al.: Radiation dose reduction in paranasal sinus CT using model-based iterative reconstruction. AJNR Am J Neuroradiol. 2014; 35(4): 644–649. PubMed Abstract | Publisher Full Text\n\nKonda SR, Howard DO, Gyftopoulos S, et al.: Computed tomography scan to detect intra-articular air in the knee joint: a cadaver study to define a low radiation dose imaging protocol. J Orthop Trauma. 2013; 27(9): 505–508. PubMed Abstract | Publisher Full Text\n\nFox AM, Kedgley AE, Lalone EA, et al.: The effect of decreasing computed tomography dosage on radiostereometric analysis (RSA) accuracy at the glenohumeral joint. J Biomech. 2011; 44(16): 2847–2850. PubMed Abstract | Publisher Full Text\n\nAbul-Kasim K, Overgaard A, Maly P, et al.: Low-dose helical computed tomography (CT) in the perioperative workup of adolescent idiopathic scoliosis. Eur Radiol. 2009; 19(3): 610–618. PubMed Abstract | Publisher Full Text\n\nLeffers KJ, Kosty JW, Garcia GM, et al.: Dataset 1 in: Reduced-dose computed tomography to detect dorsal screw protrusion after distal radius volar plating. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15056.d213893"
}
|
[
{
"id": "38078",
"date": "10 Oct 2018",
"name": "Jesse Bernard Jupiter",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is a well designed study to evaluate the accuracy of low dose CT scan in detecting screw penetration of the dorsal cortex of the distal radius when compared to high dose CT. Inter and intraobserver validation was used. The methodology, statistical analysis, and conclusions were sound.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1428
|
https://f1000research.com/articles/7-1424/v1
|
07 Sep 18
|
{
"type": "Software Tool Article",
"title": "methyvim: Targeted, robust, and model-free differential methylation analysis in R",
"authors": [
"Nima S. Hejazi",
"Rachael V. Phillips",
"Alan E. Hubbard",
"Mark J. van der Laan",
"Rachael V. Phillips",
"Alan E. Hubbard",
"Mark J. van der Laan"
],
"abstract": "We present methyvim, an R package implementing an algorithm for the nonparametric estimation of the effects of exposures on DNA methylation at CpG sites throughout the genome, complete with straightforward statistical inference for such estimates. The approach leverages variable importance measures derived from statistical parameters arising in causal inference, defined in such a manner that they may be used to obtain targeted estimates of the relative importance of individual CpG sites with respect to a binary treatment assigned at the phenotype level, thereby providing a new approach to identifying differentially methylated positions. The procedure implemented is computationally efficient, incorporating a preliminary screening step to isolate a subset of sites for which there is cursory evidence of differential methylation as well as a unique multiple testing correction to control the False Discovery Rate with the same rigor as would be available if all sites were subjected to testing. This novel technique for analysis of differentially methylated positions provides an avenue for incorporating flexible state-of-the-art data-adaptive regression procedures (i.e., machine learning) into the estimation of differential methylation effects without the loss of interpretable statistical inference for the estimated quantity.",
"keywords": [
"DNA methylation",
"differential methylation",
"epigenetics",
"causal inference",
"variable importance",
"machine learning",
"targeted loss-based estimation"
],
"content": "Introduction\n\nDNA methylation is a fundamental epigenetic process known to play an important role in the regulation of gene expression. DNA methylation most commonly occurs at CpG sites and involves the addition of a methyl group (CH3) to the fifth carbon of the cytosine ring structure to form 5-methylcytosine. Numerous biological and medical studies have implicated DNA methylation as playing a role in disease and development1. Perhaps unsurprisingly then, biotechnologies have been developed to rigorously probe the molecular mechanisms of this epigenetic process. Modern assays, like the Illumina Infinium HumanMethylation BeadChip assay, allow for quantitative interrogation of DNA methylation, at single-nucleotide resolution, across a comprehensive set of CpG sites scattered across the genome; moreover, the computational biology community has invested significant effort in the development of tools for properly removing technological effects that may contaminate biological signatures measured by such assays [2, Dedeurwaerder et al.3]. Despite these advances in both biological and bioinformatic techniques, most statistical methods available for differential analysis of data produced by such assays rely on over-simplified models that do not readily extend to such high-dimensional data structures without restrictive modeling assumptions and the use of inferentially costly hypothesis testing corrections. When these standard assumptions are violated, estimates of the population-level effect of an exposure or treatment may suffer from large bias. What’s more, reliance on restrictive and misspecified statistical models naturally leads to biased effect estimates that are not only misleading in assessing effect sizes but also result in false discoveries as these biased estimates are subject to testing and inferential procedures. 
Such predictably unreliable methods serve only to produce findings that are later invalidated by replication studies and add still further complexity to discovering biological targets for potential therapeutics. Data-adaptive estimation procedures that utilize machine learning provide a way to overcome many of the problems common in classical methods, controlling for potential confounding even in high-dimensional settings; however, interpretable statistical inference (i.e., confidence intervals and hypothesis tests) from such data-adaptive estimates is challenging to obtain4.\n\nIn this paper, we briefly present an alternative to such statistical analysis approaches in the form of a nonparametric estimation procedure that provides simple and readily interpretable statistical inference, discussing at length a recent implementation of the methodology in the methyvim R package. Inspired by recent advances in statistical causal inference and machine learning, we provide a computationally efficient technique for obtaining targeted estimates of nonparametric variable importance measures (VIMs)5, estimated at a set of pre-screened CpG sites, controlling for the False Discovery Rate (FDR) as if all sites were tested. Under standard assumptions (e.g., identifiability, strong ignorability)6, targeted minimum loss-based estimators of regular asymptotically linear estimators have sampling distributions that are asymptotically normal, allowing for reliable point estimation and the construction of Wald-style confidence intervals [7, van der Laan and Rose8]. In the context of DNA methylation studies, we define the counterfactual outcomes under a binary treatment as the observed methylation (whether Beta- or M-) values a CpG site would have if all subjects were administered the treatment and the methylation values a CpG site would have if treatment were withheld from all subjects. 
Although these counterfactual outcomes are, of course, impossible to observe, they do have statistical analogs that may be reliably estimated (i.e., identified) from observed data under a small number of untestable assumptions6. We describe an algorithm that incorporates, in its final step, the use of targeted minimum loss-based estimators (TMLE)9 of a given VIM of interest, though we defer rigorous and detailed descriptions of this aspect of the statistical methodology to work outside the scope of the present manuscript [9, van der Laan and Rose7, van der Laan and Rose8]. The proposed methodology assesses the individual importance of a given CpG site, as a proposed measure of differential methylation, by utilizing state-of-the-art machine learning algorithms in deriving targeted estimates and robust inference of a VIM, as considered more broadly for biomarkers in Bembom et al.10 and Tuglus and van der Laan11. In the present work, we focus on the methyvim software package, available through the Bioconductor project [12, Huber et al.13] for the R language and environment for statistical computing14, which implements a particular realization of this methodology specifically tailored for the analysis and identification of differentially methylated positions (DMPs).\n\nFor an extended discussion of the general framework of targeted minimum loss-based estimation and detailed accounts of how this approach may be brought to bear in developing answers to complex scientific problems through statistical and causal inference, the interested reader is invited to consult van der Laan and Rose7 and van der Laan and Rose8. 
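The counterfactual contrast described above can be illustrated with a toy simulation. The sketch below estimates an average treatment effect by simple plug-in regression (G-computation) under a correctly specified linear model; it is deliberately simpler than the TMLE the package actually uses, it is in Python rather than the package's R, and the data-generating process and variable names are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
w = rng.normal(size=n)                          # neighboring-site methylation (confounder)
a = rng.binomial(1, 1 / (1 + np.exp(-w)))       # binary treatment, confounded by w
y = 0.5 * a + 0.8 * w + rng.normal(scale=0.1, size=n)  # outcome; true ATE = 0.5

# The naive group contrast is biased because treated subjects have higher w.
naive = y[a == 1].mean() - y[a == 0].mean()

# G-computation: fit E[Y | A, W] by OLS, then average the predicted
# counterfactual outcomes under A=1 and A=0 over all subjects.
X = np.column_stack([np.ones(n), a, w])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
y1 = beta[0] + beta[1] * 1 + beta[2] * w        # predicted outcome had all been treated
y0 = beta[0] + beta[1] * 0 + beta[2] * w        # predicted outcome had none been treated
ate = (y1 - y0).mean()
```

Here the naive contrast lands well above 0.5 because of confounding, while the adjusted estimate recovers the true effect; TMLE additionally performs a targeting step that yields valid inference even when machine learning is used for the regressions.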
For a more general introduction to causal inference, Pearl6 and Hernan and Robins15 may be of interest.\n\n\nMethods\n\nThe core functionality of this package is made available via the eponymous methyvim function, which implements a statistical algorithm designed to compute targeted estimates of VIMs, defined in such a way that the VIMs represent parameters of scientific interest in computational biology experiments; moreover, these VIMs are defined such that they may be estimated in a manner that is very nearly assumption-free, that is, within a fully nonparametric statistical model. The statistical algorithm consists of several major steps summarized below. Additional methodological details on the use of targeted minimum loss-based estimation in this problem setting are provided in Supplementary File 1.\n\n1. Pre-screening of genomic sites is used to isolate a subset of sites for which there is cursory evidence of differential methylation. Currently, the available screening approach adapts core routines from the limma R package. Following the style of the function for performing screening via limma, users may write their own screening functions and are invited to contribute such functions to the core software package by opening pull requests at the GitHub repository: https://github.com/nhejazi/methyvim.\n\n2. Nonparametric estimates of VIMs, for the specified target parameter, are computed at each of the CpG sites passing the screening step. The VIMs are defined in such a way that the estimated effect is that of a binary treatment on the methylation status of a target CpG site, controlling for the observed methylation status of the neighbors of that site. Currently, routines are adapted from the tmle R package.\n\n3. 
Since pre-screening is performed prior to estimating VIMs, we apply the modified marginal Benjamini and Hochberg step-up False Discovery Rate controlling procedure for multi-stage analyses (FDR-MSA), which is well-suited for avoiding false positive discoveries when testing is only performed on a subset of potential targets.\n\nParameters of Interest For CpG sites that pass the pre-screening step, a user-specified target parameter of interest is estimated independently at each site. In all cases, an estimator of the parameter of interest is constructed via targeted minimum loss-based estimation.\n\nTwo popular target causal parameters for discrete-valued treatments or exposures are\n\nThe average treatment effect (ATE): The effect of a binary exposure or treatment on the observed methylation at a target CpG site is estimated, controlling for the observed methylation at all other CpG sites in the same neighborhood as the target site, based on an additive form. Often denoted ψ0=ψ0(1)−ψ0(0), the parameter estimate represents the additive difference in methylation that would have been observed at the target site had all observations received the treatment versus the counterfactual under which none received the treatment.\n\nThe relative risk (RR): The effect of a binary exposure or treatment on the observed methylation at a target CpG site is estimated, controlling for the observed methylation at all other CpG sites in the same neighborhood as the target site, based on a geometric form. 
Often denoted ψ0 = ψ0(1)/ψ0(0), the parameter estimate represents the multiplicative difference in methylation that would have been observed at the target site had all observations received the treatment versus the counterfactual under which none received the treatment.\n\nEstimating the VIM corresponding to the parameters above, for discrete-valued treatments or exposures, requires two separate regression steps: one for the treatment mechanism (propensity score) and one for the outcome regression. Technical details on the nature of these regressions are discussed in Hernan and Robins15, and details for estimating these regressions in the framework of targeted minimum loss-based estimation are discussed in van der Laan and Rose7.\n\nClass methytmle We have adopted a class methytmle to help organize the functionality within this package. The methytmle class builds upon the GenomicRatioSet class provided by the minfi package so all of the slots of GenomicRatioSet are contained in a methytmle object. The new class introduced in the methyvim package includes several new slots:\n\ncall - the form of the original call to the methyvim function.\n\nscreen_ind - indices identifying CpG sites that pass the screening process.\n\nclusters - non-unique IDs corresponding to the manner in which sites are treated as neighbors. These are assigned by genomic distance (bp) and respect chromosome boundaries (produced via a call to bumphunter::clusterMaker).\n\nvar_int - the treatment/exposure status for each subject. 
Currently, these must be binary, due to the definition of the supported targeted parameters.\n\nparam - the name of the target parameter from which the estimated VIMs are defined.\n\nvim - a table of statistical results obtained from estimating VIMs for each of the CpG sites that pass the screening procedure.\n\nic - the measured array values for each of the CpG sites passing the screening, transformed into influence curve space based on the chosen target parameter.\n\nThe show method of the methytmle class summarizes a selection of the above information for the user while masking some of the wealth of information given when calling the same method for GenomicRatioSet. All information contained in GenomicRatioSet objects is preserved in methytmle objects, so as to ease interoperability with other differential methylation software for experienced users. We refer the reader to the package vignette, “methyvim: Targeted Data-Adaptive Estimation and Inference for Differential Methylation Analysis,” included in any distribution of the software package, for further details.\n\nA standard computer with the latest version of R and Bioconductor 3.6 installed will handle applications of the methyvim package.\n\n\nUse cases\n\nTo examine the practical applications and the full set of utilities of the methyvim package, we will use a publicly available example data set produced by the Illumina 450K array, from the minfiData R package, accessible via the Bioconductor project at https://doi.org/doi:10.18129/B9.bioc.minfiData.\n\nPreliminaries: Setting up the data We begin by loading the package and the data set. After loading the data, which comes in the form of a raw MethylSet object, we perform some further processing by mapping to the genome (with mapToGenome) and converting the values from the methylated and unmethylated channels to Beta-values (via ratioConvert). 
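The ratioConvert step computes Beta-values from the methylated (M) and unmethylated (U) channel intensities. A minimal sketch of the standard definition, Beta = M / (M + U + offset), is below; it is in Python for illustration, the offset of 100 follows Illumina's commonly used stabilizing convention, and the intensity values are invented.

```python
import numpy as np

def beta_values(meth, unmeth, offset=100):
    """Beta-value per CpG site: the proportion of methylated signal,
    with an additive offset to stabilize low-intensity sites."""
    meth, unmeth = np.asarray(meth, float), np.asarray(unmeth, float)
    return meth / (meth + unmeth + offset)

# Invented channel intensities for three CpG sites:
# mostly methylated, mostly unmethylated, hemi-methylated.
b = beta_values([9000, 100, 4500], [1000, 9000, 4500])
```

Beta-values are bounded in [0, 1] and read directly as a methylation proportion, which is why they are a common choice of outcome scale in the analyses described here.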
These two steps together produce an object of class GenomicRatioSet, provided by the minfi package.\n\n\n\n\n\nWe can create an object of class methytmle from any GenomicRatioSet object simply by invoking the S4 class constructor .methytmle:\n\n\n\n\n\n\n\n\n\nAdditionally, a GenomicRatioSet can be created from a matrix with the function makeGenomicRatioSetFromMatrix provided by the minfi package.\n\nDifferential Methylation Analysis For this example analysis, we’ll treat the condition of the patients as the exposure/treatment variable of interest. The methyvim function requires that this variable either be numeric or easily coercible to numeric. To facilitate this, we’ll simply convert the covariate (currently a character):\n\n\n\nn.b., the re-coding process results in “normal” patients being assigned a value of 1 and cancer patients a 0.\n\nNow, we are ready to analyze the effects of cancer status on DNA methylation using this data set. We proceed as follows with a targeted minimum loss-based estimate of the Average Treatment Effect.\n\n\n\n\n\nNote that we set the obs_per_covar argument to a relatively low value (just 2, even though the recommended value, and default, is 20) for the purposes of this example as the sample size is only 10. We do this only to exemplify the estimation procedure and it is important to point out that such low values for obs_per_covar will compromise the quality of inference obtained because this setting directly affects the definition of the target parameter.\n\nFurther, note that here we apply the glm flavor of the tmle_type argument, which produces faster results by fitting models for the propensity score and outcome regressions using a limited number of parametric models. By contrast, the sl (for “Super Learning”) flavor fits these two regressions using highly nonparametric and data-adaptive procedures (i.e., via machine learning). 
Obtaining the estimates via GLMs results in each of the regression steps being less robust than if nonparametric regressions were used.\n\nWe can view a table of results by examining the vim slot of the produced object, most easily displayed by simply printing the resultant object:\n\n\n\n\n\nFinally, we may compute FDR-corrected p-values, by applying a modified procedure for controlling the False Discovery Rate for multi-stage analyses (FDR-MSA)16. We do this by simply applying the fdr_msa function.\n\n\n\nHaving explored the results of our analysis numerically, we now proceed to use the visualization tools provided with the methyvim R package to further enhance our understanding of the results.\n\nVisualization of results While making allowance for users to explore the full set of results produced by the estimation procedure (by way of exposing these directly to the user), the methyvim package also provides three (3) visualization utilities that produce plots commonly used in examining the results of differential methylation analyses.\n\nA simple call to plot produces side-by-side histograms of the raw p-values computed as part of the estimation process and the corrected p-values obtained from using the FDR-MSA procedure.\n\n\n\n\n\nRemark: The plots displayed above may also be generated as side-by-side histograms in a single plot object. This is the default for the plot method and may easily be invoked by specifying no additional arguments to the plot function, unlike in the above.\n\nFrom the code snippets displayed above, Figure 1 displays a histogram of raw (or uncorrected) p-values from hypothesis testing of the statistical parameter corresponding to the average treatment effect while Figure 2 produces a histogram of p-values from the same set of hypothesis tests, correcting for multiple testing using the FDR-MSA method16. 
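The FDR-MSA correction applied above amounts to running the Benjamini–Hochberg step-up procedure over the full set of sites while assigning a p-value of 1 to every site removed at the screening stage, so that the FDR is controlled as if all sites had been tested. A minimal sketch of this idea follows; it is in Python rather than the package's R, and the p-values are invented.

```python
import numpy as np

def fdr_msa(pvals_tested, n_total):
    """Benjamini-Hochberg adjusted p-values computed over n_total sites,
    padding the screened-out sites with p = 1 (the FDR-MSA correction)."""
    p = np.concatenate([np.asarray(pvals_tested, float),
                        np.ones(n_total - len(pvals_tested))])
    order = np.argsort(p)
    scaled = p[order] * n_total / np.arange(1, n_total + 1)  # p_(i) * n / i
    # Enforce monotonicity of adjusted p-values from the largest rank down.
    monotone = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n_total)
    out[order] = np.minimum(monotone, 1.0)
    return out[:len(pvals_tested)]  # adjusted p-values for the tested sites

# Four sites survived screening out of a hypothetical ten total.
adj = fdr_msa(np.array([0.001, 0.01, 0.02, 0.8]), n_total=10)
```

Because the padding p-values of 1 can only inflate the adjusted values, the correction is conservative relative to running Benjamini–Hochberg on the tested subset alone, which is the price of controlling the FDR over all sites.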
While histograms of the p-values may be generally useful in inspecting the results of the estimation procedure, a more common plot used in examining the results of differential methylation procedures is the volcano plot, which plots the parameter estimate along the x-axis and −log10(p-value) along the y-axis. We implement such a plot in the methyvolc function:\n\n\n\nFigure 3 above displays a volcano plot of the raw (or unadjusted) p-values against estimates of the effect of interest (by default, the average treatment effect in methyvim). The purpose of such a plot is to ensure that very low (possibly statistically significant) p-values do not arise from cases of low variance. This appears to be the case in the plot above (notice that most parameter estimates are near zero, even in cases where the raw p-values are quite low).\n\nYet another popular plot for visualizing effects in such settings is the heatmap, which plots estimates of the raw methylation effects (as measured by the assay) across subjects using a heat gradient. We implement this in the methyheat function:\n\n\n\nRemark: Figure 4 displays the result of invoking methyheat in this manner: a plot of the top sites (25, by default) ranked by raw p-value, using the raw methylation measures in the plot. This uses the exceptional superheat R package17, to which we can easily pass additional parameters. In particular, we hide the CpG site labels that would appear by default on the left of the heatmap (by setting left.label = \"none\") to emphasize that this is only an example and not a scientific discovery.\n\n\nSummary\n\nHere we introduce the R package methyvim, an implementation of a general algorithm for differential methylation analysis that allows for recent advances in causal inference and machine learning to be leveraged in computational biology settings. 
The estimation procedure produces straightforward statistical inference and takes great care to ensure the computational efficiency of the technique for obtaining targeted estimates of nonparametric variable importance measures. A detailed account of the statistical procedure, including an overview of targeted minimum loss-based estimation, is made available in Supplementary File 1. The software package includes techniques for pre-screening a set of CpG sites, controlling for the False Discovery Rate as if all sites were tested, and for visualizing the results of the analyses in a variety of ways. The anatomy of the software package is dissected and the design described in detail. The methyvim R package is available via the Bioconductor project.\n\n\nSoftware availability\n\nmethyvim is available on Bioconductor (stable release): https://bioconductor.org/packages/methyvim\n\nLatest source code (development version): https://github.com/nhejazi/methyvim\n\nArchived source code as at time of publication: https://dx.doi.org/10.5281/zenodo.1401298\n\nDocumentation (development version): https://code.nimahejazi.org/methyvim\n\nSoftware license: The MIT License, copyright Nima S. Hejazi",
"appendix": "Grant information\n\nNH was supported in part by the National Library of Medicine of the National Institutes of Health under Award Number T32-LM012417, by P42-ES004705, and by R01-ES021369. RP was supported by P42-ES004705. The content of this work is solely the responsibility of the authors and does not necessarily represent the official views of the various funding sources and agencies.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: Statistical procedure for identifying differentially methylated positions.\n\nClick here to access the data.\n\n\nReferences\n\nRobertson KD: DNA methylation and human disease. Nat Rev Genet. 2005; 6(8): 597–610. PubMed Abstract | Publisher Full Text\n\nFortin JP, Labbe A, Lemire M, et al.: Functional normalization of 450k methylation array data improves replication in large cancer studies. bioRxiv. 2014. Publisher Full Text\n\nDedeurwaerder S, Defrance M, Bizet M, et al.: A comprehensive overview of Infinium HumanMethylation450 data processing. Brief Bioinform. 2014; 15(6): 929–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLibbrecht MW, Noble WS: Machine learning applications in genetics and genomics. Nat Rev Genet. 2015; 16(6): 321–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan der Laan MJ: Statistical inference for variable importance. Int J Biostat. 2006; 2(1). Publisher Full Text\n\nPearl J: Causality: Models, Reasoning, and Inference. Cambridge University Press, 2009. Reference Source\n\nvan der Laan MJ, Rose S: Targeted Learning: Causal Inference for Observational and Experimental Data. Springer Science & Business Media, 2011. Publisher Full Text\n\nvan der Laan MJ, Rose S: Targeted Learning in Data Science: Causal Inference for Complex Longitudinal Studies. Springer Science & Business Media, 2018. 
Publisher Full Text\n\nvan der Laan MJ, Rubin D: Targeted maximum likelihood learning. Int J Biostat. 2006; 2(1). Publisher Full Text\n\nBembom O, Petersen ML, Rhee SY, et al.: Biomarker discovery using targeted maximum-likelihood estimation: application to the treatment of antiretroviral-resistant HIV infection. Stat Med. 2009; 28(1): 152–172. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTuglus C, van der Laan MJ: Targeted methods for biomarker discovery. In: Targeted Learning. Springer, 2011; 367–382. Publisher Full Text\n\nGentleman RC, Carey VJ, Bates DM, et al.: Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10): R80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nR Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2018. Reference Source\n\nHernan MA, Robins JM: Causal Inference. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis, 2018, forthcoming. Reference Source\n\nTuglus C, van der Laan MJ: Modified FDR controlling procedure for multi-stage analyses. Stat Appl Genet Mol Biol. 2009; 8(1): 1–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarter RL, Yu B: Superheat: An R package for creating beautiful and extendable heatmaps for visualizing complex data.2017. Publisher Full Text\n\nHejazi N, Phillips R, vobencha, et al.: nhejazi/methyvim: methyvim: F1000Research Publication (Version f1000). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1401298"
}
|
[
{
"id": "38070",
"date": "25 Sep 2018",
"name": "Peter F. Hickey",
"expertise": [
"DNA methylation analysis",
"statistics",
"bioinformatics",
"Bioconductor"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper presents the methyvim R/Bioconductor package for differential methylation analysis, a common task in the analysis of data from DNA methylation microarrays. The method has 3 main steps:\n(1) A pre-screen of CpGs to identify putative differentially methylated CpGs. (2) A secondary statistical procedure to obtain \"targeted\" estimates of nonparametric variable importance measures (corresponding to either the average treatment effect or relative risk) for these filtered CpGs. (3) Adjustment of the results obtained in (2) using a modified Benjamini-Hochberg procedure for multi-stage analysis.\nI have 3 major concerns with the paper:\n(1) The statistical methods are almost certainly unfamiliar to potential users of the software, and the paper does not do enough to explain or justify the use of these procedures. (2) The example dataset is not appropriate, nor is its analysis enlightening, to demonstrate to the interested reader where this method may be useful. (3) The results of the method applied to the example dataset are not compared to existing methods. Furthermore, when graphically compared to a simple analysis, the results obtained by methyvim do not look like convincing differentially methylated CpGs.\nThe software itself appears to be well written and documented and is available as part of Bioconductor. 
Its development follows good practices for open-source R packages such as use of unit tests and continuous integration, integration with existing Bioconductor packages, and code available from open-source repository (Bioconductor git server and GitHub). Below, I have also included some minor suggestions with respect to the software.\nIn light of these 3 major concerns, each discussed further below, I find it very difficult to assess whether methyvim is software I would be interested in using or recommending to someone analysing DNA methylation data. Consequently, I cannot approve this article at this time.\nMain Concerns:\nLack of a simple explanation and justification for the statistical procedures\nI came to this paper being unfamiliar with a statistical technique central to this paper, namely, 'targeted minimum loss estimators' (TMLE). Unfortunately, after carefully reading the paper several times, it's still not clear to me the purported benefits and limitations of this method nor its appropriateness and utility for analysing DNA methylation data.\nAlthough there are several references to books and papers that cover the \"general framework of targeted minimum loss-based estimation and [detail] accounts of how this approach may be brought to bear in developing answers to complex scientific problems through statistical and causal inference\", there is no simple explanation of TMLE for the reader who might care about it solely in the context of the current paper: how it is being used to identify differentially methylated loci and how this differs from existing methods.\nChoice of example dataset and analysis\nPlease use a dataset that is suitable to demonstrate the utility and appropriateness of methyvim. The example dataset comes from the minfiData R/Bioconductor package and contains matched tumour-normal samples from 3 donors (n = 6, mistakenly referred to as n = 10 on p6). 
The authors admit that this is a small sample size for their method and that this \"compromises the quality of inference obtained\" by their method. Consequently, it seems unlikely that methyvim is going to produce new insights on this dataset nor will it exemplify the purported utility and appropriateness of the methods implemented in methyvim.\nLack of comparison to existing methods\nThe results obtained by methyvim need to be compared to those obtained by one of the existing tools (that are claimed to have poor performance for the types of problems methyvim seeks to address). In particular, as a reader, I was looking for the types of differentially methylated sites that this method detects that others might not and vice versa.\nTo satisfy my curiosity, I applied the very simple minfi::dmpFinder() to the example dataset and took the top-250 CpGs (the same number of CpGs as reported by the example code in the paper). I then plotted methyvim's and minfi::dmpFinder()'s top-250 CpGs using minfi::plotCpg() to visually assess the quality of the differential methylation analysis. The top-250 CpGs from minfi::dmpFinder() look like real differentially methylated CpGs: large between-condition mean differences and small within-condition variances. In contrast, many of the top CpGs identified by methyvim do not look like real differentially methylated CpGs: small between-condition mean differences and/or large within-condition variances. This is exemplified by the top CpG called by methyvim (cg15703790, P = 6 x 10^-33, adjusted-P = 3 x 10^-27), which is not called as a differentially methylated CpG by minfi::dmpFinder() (P = 0.11, Q = 0.26) and when plotted does not appear to be a real differentially methylated CpG. 
The code to run this comparison and the results figures are available in Result 1 (the R file containing code to generate Result 2 and Result 3), Result 2 (the methyvim output from Result 1) and Result 3 (the minfi output from Result 1) (produced using methyvim v1.3.1).\nMinor Suggestions:\nSome of these suggestions may be difficult to incorporate (even if desirable) while incorporating backwards compatibility.\nDesign of the methytmle class\nThe clusters slot could perhaps be a metadata column on the rowRanges slot, accessible via the rowData() getter/setter. That way when the object is subsetted the clusters would automatically get properly subsetted (currently the clusters slot doesn't behave when the object is subset with [). The screen_ind slot could also be a metadata column on the rowRanges slot but would need to be a TRUE/FALSE vector (rather than a numeric vector) with the same length as the number of rows of the object.\nFor both of the above, you could use how spike-in genes are handled in the SingleCellExperiment class for inspiration.\nThe var_int slot seems like it should just be part of the colData slot rather than its own slot. Again, this would ensure proper subsetting behaviour when the object is subset with [. The call, param and vim slots could perhaps be elements of the metadata slot. The vim slot could be a DataFrame rather than a data.frame. 
The main advantage is that then the show,methytmle-method wouldn't print out as much output as it currently does (the obvious alternative would be to alter the show() method to prevent so much output).\nConstructor function\n.methytmle(): A period at the start of a function name typically indicates that the function is for internal use and not exported; see https://bioconductor.org/packages/devel/bioc/vignettes/SummarizedExperiment/inst/doc/Extensions.html#defining-the-class-and-its-constructor\nPlots\n\nThe colour scale and legend on the histograms (Figures 1-2) and volcano plots (Figure 3) don't seem to add anything and are a bit distracting. Are they necessary?\n\nIs the rationale for developing the new software tool clearly explained? No\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": [
{
"c_id": "4010",
"date": "26 Sep 2018",
"name": "Peter Hickey",
"role": "Reviewer Response",
"response": "It appears the filenames of the attachments were mangled upon upload. Result 1: the R file containing code to generate Result 2 and Result 3. Result 2: the methyvim output from Result 1. Result 3: the minfi output from Result 1."
}
]
},
{
"id": "51717",
"date": "20 Aug 2019",
"name": "Jimmy Breen",
"expertise": [
"The area of expertise of both reviewers is in genomics and epigenetics. We note that we are both only moderately skilled in statistics."
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThe paper \"methyvim: Targeted, robust, and model-free differential methylation analysis in R\" authored by Hejazi NS, Phillips RV, Hubbard AE and van der Laan MJ, presents a model-free method for determining differential methylation in the R programming environment, leveraging methods used in machine learning research to determine variable importance. The paper topic is very interesting and potentially quite valuable to the epigenetics and bioinformatics communities, especially given the similarity of methods currently utilised for differential methylation analysis.\nThe manuscript is very good but we had a number of concerns, comments and suggestions regarding how the method is presented or can be improved:\nReadability:\nThe real value of F1000 papers is the freedom that the journal gives the author to extensively explain individual steps in an analysis. The way the manuscript is currently set out is fine for a standard paper, but it would be great if there were direct examples explained where concepts are introduced (i.e. in the \"implementation\" section), rather than having examples later in the \"Operation/Use Cases\" section. This would vastly simplify the text and enable each function and parameter to be comprehensively explained in the context of the method presented.\n\nAdditionally, the authors do a great job at giving background to variable importance measurements, however some of the explanations could be simplified for readability. 
Introducing background to a concept such as this can be challenging so the authors have done quite well at explaining this. I am definitely not an expert in this approach and even after re-reading the introduction it was difficult to understand without additional materials.\n\nOne such example was on page 3 of the manuscript: \"In the context of DNA methylation studies, we define the counterfactual outcomes under a binary treatment as the observed methylation (whether Beta- or M-) values a CpG site would have if all subjects were administered the treatment and the methylation values a CpG site would have if treatment were withheld from all subjects.\" This could easily be simplified, perhaps by providing simple examples or analogies to help the casual reader, many of whom have only limited statistical knowledge.\n\nLack of performance comparisons:\nGiven the departure from the current consensus (perhaps perceived consensus) of using model-based statistical tests to determine differential methylation, a comparison of methyvim to other algorithms or methods would enable the reader to accurately gauge the differences between the approaches. For example, what are the differences between parametric and non-parametric approaches and are there any additional advantages from using this approach compared to `methylkit`?\n\nPre-screening:\nOverall there needs to be a greater explanation regarding the pre-screening of genomic sites. This would allow the reader to gauge the best parameters for this non-parametric approach. For example, the number of CpG sites was reduced from 485,512 sites to under 500 sites based on Figure 1 (p-values histogram) and Figure 2. What was the number of significant p-values in the first report of these public data (or if you run with any other existing package)? 
Will there be an issue with too much data trimming for the sake of having less computational demand?\n\nScalability:\nThe example described in this manuscript is from a 450K array, which makes for an easy example that is widely used in human epigenetics research. How does this approach scale to larger numbers of sites or samples? For example, if you used >1 million sites, do you get a comparable number of trimmed sites? Can this be implemented in whole genome bisulfite sequencing (WGBS) analyses, which is likely to have significantly increased numbers of sites?\n\nGiven the size of current DNA methylation studies, the human or other research areas, perhaps a larger dataset would be helpful to include.\n\nNon-CpG methylation contexts:\nHow would this `methyvim` package treat data for non-CpG contexts? For example, CHH methylation contexts have a methylation ratio which is much lower than the CpG context in humans, and therefore maybe difficult for the `methyvim` to filter. Illumina EPIC arrays (~850K) have a mixture of these sites on the array, so does that mixture create issues? Some level of work looking at that would be really important in this manuscript.\n\nOther comments:\nSome explanation about the results of the methyvim_cancer_ate would be nice (e.g. what was the maximum distance of two neighbouring CpG sites to be called neighbours? Are these results sorted by their coordinates (it might be easy to understand neighbouring effects if sorted)? Has pre-screening affected these max_cor_neighbors statistics?\n\nWhat would the sample classification look like using this non-parametric approach (e.g. PCA-plot)?\n\nIn general, a greater explanation of results from each function would be useful.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? 
Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": []
},
{
"id": "50781",
"date": "17 Sep 2019",
"name": "Nandita Mitra",
"expertise": [
"Reviewer Expertise Causal inference",
"statistical genetics",
"health policy and economics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe manuscript by Hejazi et al. describes a new R package, methyvim, which utilizes targeted loss-based estimation (TMLE) and causal inference approaches to estimate and conduct inference on the effects of exposures on DNA methylation. Overall, I found the machine learning/causal inference technical details to be sound and the arguments for using this TMLE based algorithm to be compelling; namely that it is a nonparametric, flexible approach that is computationally efficient and has asymptotic properties that allow straightforward point and interval estimation. The visualization tools are also very nice. However, for the reader who is unfamiliar with causal inference, generally, or TMLE, specifically, the manuscript can be a bit confusing. I also suggest that more guidance be provided to the reader on interpretation of results rather than focusing purely on what is outputted by the software. Some of the paper resembles software documentation (a manual on how to implement) rather than a traditional manuscript that describes a new method with intuition. A few comments follow that I hope the authors will find helpful in revising their manuscript:\n\nIt would be helpful to have more description on variable importance measures (VIMs). VIMs are not commonly used measures by statistical geneticists or other researchers; however, they are integral to understanding the approach underlying methyvim. 
VIMs are described only very briefly in the Introduction.\n\nThe Methods section is a bit choppy and could use better organization. Perhaps there could first be a section laying out the causal question and statistical approach, followed by implementation and operation.\n\nFor those not familiar with causal inference terms, the definition of the ATE could be simplified by simply using expectations rather than psi.\n\nImportantly, it would be very helpful to provide comparisons of methyvim to other commonly used methods and software used for DNA methylation studies. This would provide more compelling evidence for why researches should use methyvim. Providing both a comparison of outputs and interpretation of results across methods would be useful for users.\n\nReview of Software Implementation:\n\nThe package methyvimData is required to run the code example in the documentation for the methyvim function. If the package is not installed, the user gets an error. Please either make this package a dependency so that it will necessarily be installed, add a test to see whether the package is installed, and if not, install the package before loading, or at the very least add comments to the code example alerting users to the requirement that the package be installed.\n\nWhen trying to use the methyvolc function for a methytmle object with vim = “rr”, I get the following error:\nError in param > param_bound :\n\ncomparison (6) is possible only for atomic and list types\n\n3. Could an example be provided in the documentation for use with a continuous treatment?\n\n4. When I run the code:\n\nmethyvim_cancer_ate <- methyvim(data_grs = grs, var_int = var_int,\n\nvim = \"ate\", type = \"Beta\", filter = \"limma\",\n\nfilter_cutoff = 0.20, obs_per_covar = 2,\n\nparallel = FALSE, sites_comp = 250,\n\ntmle_type = \"glm\"\n\n)\n\nas provided in the vignette, I get the following error repeated 13 times:\n\nError in terms.formula(formula, data = data) :\n\n'.' 
in formula and no 'data' argument\n\n5. I get slightly different results from those in the vignette. For example, I get the following row as part of the output:\n\ncg01782097 -9.229141e-03 0.0010232901 0.0112757216 2.736161e-05 8.449024e-01\n\n6. Does the code have a stochastic component? If so, a seed should be set at the start of the vignette to ensure users get the same results as those in the vignette.\n\n7. In the plot produced by methyvolc, what is the “0” color label referring to? Would this ever have multiple values? Could a label for the legend be provided?\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1424
|
https://f1000research.com/articles/7-738/v1
|
13 Jun 18
|
{
"type": "Case Report",
"title": "Case Report: Diffuse T wave inversions as initial electrocardiographic evidence in acute pulmonary embolism",
"authors": [
"Ogechukwu Egini",
"Alix Dufresne",
"Mazin Khalid",
"Chinedu Egini",
"Eric Jaffe"
],
"abstract": "Acute pulmonary embolism (PE) is a life-threatening condition and is typically diagnosed by a combination of symptoms, clinical signs and imaging. Electrocardiogram may be helpful in diagnosis, and the most widely described pattern of occurrence is the so-called S1Q3T3 pattern. Here, we describe the case of an African-American male who presented with typical chest pain and diffuse T wave inversions with serial troponin elevation. There was initial concern for Wellens' syndrome, but the final diagnosis was acute PE. This case underscores the necessity of vigilance and a lower threshold for PE workup even in patients presenting as acute coronary syndrome.",
"keywords": [
"PE",
"T-waves",
"inversion"
],
"content": "Introduction\n\nAcute pulmonary embolism (PE) is responsible for 20–25% of sudden deaths in the United States1,2. It exacts a huge economic burden both on the sufferer and the health system, with some estimates placing the annual cost of care between $7,594 and $16,644 per patient3. Prompt diagnosis is essential to reduce disease burden. The so-called S1Q3T3 pattern is the classic electrocardiogram (EKG) presentation in acute PE4 but is not seen in all acute PE cases. We present a case of acute PE with an initial clinical presentation that mimicked acute coronary syndrome and an initial EKG pattern that suggested Wellens’ syndrome.\n\n\nCase report\n\nA 66 year old African-American male presented to the Emergency Room (ER) complaining of a 2-hour history of chest pain. Chest pain was described as left-sided, non-pleuritic, non-radiating, retrosternal, squeezing in character and persistent. Pain was reported as 9 on a 10-point pain scale and relieved by taking a 0.4mg tablet of nitroglycerin sublingually. It was associated with shortness of breath, dizziness and sweating, but the patient denied loss of consciousness, cough, palpitation or swelling of the extremities. He denied any use of illicit substances. A week prior to this hospitalization he presented to the hospital with a similar complaint. At that time, chest pain was relieved by a 325mg dose of Aspirin taken orally; troponin was normal and EKG did not show any significant change from baseline. His echocardiogram was also normal and he was discharged with a scheduled outpatient stress test. Medical history was significant for poorly-controlled type 2 diabetes, hypertension, dyslipidemia and obesity.\n\nOn this visit, his pulse rate was 84 beats per minute; BP 119/66 mmHg; respiration rate 16 breaths per minute and his oxygen saturation was 98% on room air. 
Initial troponin was elevated at 0.19ng/ml (reference 0.00 – 0.05ng/ml); hemoglobin was 14.4g/dl (reference 13–17g/dl) and platelet count was 210 × 10³/ul (reference 130–400 × 10³/ul).\n\nEKG showed deep T wave inversions in leads V1-V6 and the inferior limb leads (Figure 1). An assessment of non-ST elevation myocardial infarction was made, and a loading dose of Aspirin (325 mg) and Plavix (300 mg) was given orally in the ER along with Atorvastatin (80 mg) and a weight-based dose of Enoxaparin. Repeat troponin 6 hours later was 1.05ng/ml. Left heart catheterization revealed normal coronaries. Oxygen saturation dropped to 91% on room air while the patient lay on the catheterization table but improved with supplemental oxygen via nasal cannula. A repeat EKG at this time showed a Q3T3 pattern in lead III (Figure 2). This was followed by a computerized tomography of the chest with angiogram (chest CTA), which revealed a saddle pulmonary embolus which extended into the right and left pulmonary arteries and involved all lobar branches of the pulmonary arteries.\n\nTreatment was continued with Enoxaparin (100mg subcutaneously every 12 hours) for 6 days, at which time he became stable and maintained oxygen saturation above 96% even when supine. He was discharged on Apixaban (10mg po bid for 7 days followed by 5mg po bid) with a plan to complete 3 months of therapy. Follow-up visits were scheduled with the Cardiology and Hematology clinics.\n\n\nDiscussion\n\nAcute pulmonary embolism (PE) is caused by blockage of a pulmonary artery by a blood clot. In one study, investigators found that the commonest clinical symptoms in acute PE patients were dyspnea, chest pain, syncope and hemoptysis4. A number of EKG findings have been described in acute PE patients, but the classic EKG finding is the S1Q3T3 pattern5. The incidence of this pattern in acute PE is highly variable5. 
Other EKG changes have been reported in patients diagnosed with PE6, but in those cases there was initial supporting clinical evidence to warrant suspicion and further diagnostic testing for PE. On the contrary, our patient presented with features suggestive of acute coronary syndrome - typical chest pain, diffuse T wave inversions and elevated cardiac enzymes. Pulse rate, respiration rate and oxygen saturation were normal, essentially making an acute PE assessment difficult at the time of presentation. Given a background of significant cardiovascular risk factors, a coronary event was thought more likely. Deep T wave inversions in the precordial leads were concerning for Wellens’ syndrome7. The only clue to possible acute PE in our case was the transient desaturation that occurred during cardiac catheterization. This dictated the urgency of getting a chest CTA. The chest CTA is the gold standard for the diagnosis of PE and was shown in the Prospective Investigation of Pulmonary Embolism Diagnosis II (PIOPED II) study to have high sensitivity and specificity for acute PE diagnosis; its results were also concordant with the pretest Wells criteria8. A ventilation-perfusion (V/Q) scan may also effectively diagnose acute PE and is useful in renal insufficiency or contrast allergy. Treatment of acute PE is based on risk stratification. Anticoagulation is the mainstay of therapy, and the duration of treatment is determined by a number of factors including provoked vs unprovoked PE and/or recurrence of acute PE. Those with acute PE and hypotension without significant bleeding risk require thrombolysis9. 
In some cases of massive PE with contraindication to or failure of systemic fibrinolysis, surgical or catheter embolectomy can be considered10.\n\n\nConclusion\n\nAcute pulmonary embolism should be considered as a differential diagnosis in patients with deep T wave inversions on EKG who do not have a typical PE presentation.\n\n\nConsent\n\nWritten informed consent for the publication of the patient’s clinical details and clinical images was obtained from the patient.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHeit JA: The epidemiology of venous thromboembolism in the community: implications for prevention and management. J Thromb Thrombolysis. 2006; 21(1): 23–29. PubMed Abstract | Publisher Full Text\n\nWhite RH: The epidemiology of venous thromboembolism. Circulation. 2003; 107(23 suppl 1): I4–8. PubMed Abstract | Publisher Full Text\n\nSpyropoulos AC, Lin J: Direct medical costs of venous thromboembolism and subsequent hospital readmission rates: an administrative claims analysis from 30 managed care organizations. J Manag Care Pharm. 2007; 13(6): 475–486. PubMed Abstract | Publisher Full Text\n\nMiniati M, Cenci C, Monti S, et al.: Clinical presentation of acute pulmonary embolism: survey of 800 cases. PLoS One. 2012; 7(2): e30891. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTodd K, Simpson CS, Redfearn, DP, et al.: ECG for the diagnosis of pulmonary embolism when conventional imaging cannot be utilized: a case report and review of the literature. Indian Pacing Electrophysiol J. 2009; 9(5): 268–75. PubMed Abstract | Free Full Text\n\nBecoats K, Gunawardena V: Deep T-Wave Inversions: An Interesting EKG Manifestation of Pulmonary Embolism. Chest. 2017; 152(4): A266. Publisher Full Text\n\nOzdemir S, Cimilli Ozturk T, Eyinc Y, et al.: Wellens' Syndrome - Report of two cases. Turk J Emerg Med. 2016; 15(4): 179–181. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStein PD, Fowler SE, Goodman LR, et al.: Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006; 354(22): 2317–2327. PubMed Abstract | Publisher Full Text\n\nGuyatt GH, Akl EA, Crowther M, et al.: Executive summary: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 
2012; 141(2 Suppl): 7S–47S. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJaff MR, McMurtry MS, Archer SL, et al.: Management of massive and submassive pulmonary embolism, iliofemoral deep vein thrombosis, and chronic thromboembolic pulmonary hypertension: a scientific statement from the American Heart Association. Circulation. 2011; 123(16): 1788–1830. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "35034",
"date": "03 Jul 2018",
"name": "Chibundo Uchenna Nwaneli",
"expertise": [
"Heart failure",
"Hypertension",
"Echocardiography",
"Cardiovascular Risk factors"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe article I reviewed is a case report of a 66-year-old African American who presented to the hospital with recurrent chest pain and whose initial evaluation was thought to be consistent with coronary artery disease. He was subsequently found to have acute pulmonary embolism on chest CT angiogram and normal coronary vessels. This case is quite intriguing because of the presentation. It raises some questions I would like the authors to try to answer and explain.\nThe initial presentation with chest pain, which resolved with administration of 325 mg aspirin: did the team think it was pulmonary embolism? Is it usual for an embolism to respond to a single low dose of aspirin?\n\nThe relief of the patient's pain with Nitroglycerin: is it typical of chest pain from embolism?\n\nIs it possible that we are dealing with non-ST segment myocardial infarction (NSTEMI) coexisting with pulmonary embolism?\n\nMyocardial infarction occurring in the setting of normal coronary arteries has been reported. Could this be the case here, knowing full well that this patient had multiple risk factors for coronary artery disease?\nThe EKG findings observed in the index patient have been described in pulmonary embolism, as has the elevated troponin.
They could also be found in NSTEMI.\n\nTreatment of both submassive pulmonary embolism and NSTEMI requires anticoagulation.\nI don't know how many images the journal allows for a case report, but I would like the authors to include the baseline EKG of the patient on his first presentation to the hospital and, if also permissible, the chest CT and coronary angiogram images. This, I think, will enable readers to agree with their conclusions.\n\nOverall the case report was well written.\n\nThe take-home message is that the diagnosis of pulmonary embolism can be difficult, especially when the features mimic other conditions such as acute coronary syndrome. EKG changes may give a clue to the diagnosis. As clinicians we know the inexactness of scientific data, and we should keep an open mind to other differential diagnoses.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "35031",
"date": "10 Jul 2018",
"name": "Chukwudi Obiagwu",
"expertise": [
"Cardiology",
"heart failure",
"interventional cardiology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors describe a case of acute PE initially masquerading as acute MI. The authors do a good job presenting a manuscript that is easy to read. However, there are a few points to be noted:\nIntroduction: In citing the cost of care for PE, the authors rely on an old article, whereas there are more recent articles providing more current cost estimates. See citation added below1.\n\nThe authors' use of punctuation marks while describing the characteristics of chest pain needs to be reviewed.\n\nIs \"in this visit\" the right phrase to use or \"on this visit\" the proper one?\n\nA review of this sentence \"Oxygen saturation dropped to 91% in room air while the patient laid on catheterization table but improved with supplemental oxygen via nasal cannula\" is needed.\n\nThe authors would do well to provide an image of the coronary angiogram and CTA PE protocol.\n\nPost coronary angiogram, while in the cath lab, did the patient still complain of chest pain, was he tachypneic, or in distress that led to desaturation to 91%? A lot of health care providers will not get a CTA chest for an O2 saturation of 91% without other symptoms.\nOn seeing normal coronary arteries, prior symptoms could have been ascribed to MINOCA. See citation 22.\n\nIs the background of the case’s history and progression described in sufficient detail? Partly\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes?
Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": [
{
"c_id": "3952",
"date": "06 Sep 2018",
"name": "Ogechukwu Egini",
"role": "Author Response",
"response": "Thank you for the responses. We have updated Version 2 to reflect a more recent analysis of the cost imperative of treating acute PE in a US hospital and have also included images of both the coronary angiogram and an axial cut of the CTA PE. MINOCA is a consideration in this situation. However, given the two new findings of a drop in oxygen saturation and the change in EKG pattern, we thought it was best to rule out acute PE at that time."
}
]
}
] | 1
|
https://f1000research.com/articles/7-738
|
https://f1000research.com/articles/7-1420/v1
|
06 Sep 18
|
{
"type": "Case Report",
"title": "Case Report: Clinical manifestation and dental management of Papillon-Lefèvre syndrome",
"authors": [
"Yasmin Mohamed Yousry",
"Amr Ezzat Abd EL-Latif",
"Randa Youssef Abd El-Gawad",
"Amr Ezzat Abd EL-Latif",
"Randa Youssef Abd El-Gawad"
],
"abstract": "Background: Papillon-Lefèvre syndrome (PLS) is a rare syndrome characterized by the presence of palmar-plantar hyperkeratosis and aggressively progressing periodontitis that finally leads to premature loss of both deciduous and permanent teeth. Case report: A four-year-old Egyptian boy presented with a maternal complaint of early loss of many teeth and the presence of loose teeth, along with an asymptomatic swelling related to the upper anterior area. The patient was diagnosed with PLS. A symptomatic management and prevention program was followed and the swelling was excised; it was afterwards diagnosed as peripheral ossifying fibroma. Conclusion: Early recognition and intervention for patients with PLS are essential to avoid the threat of the patient becoming edentulous if the condition is left unmanaged.",
"keywords": [
"Papillon – Lefèvre syndrome",
"Periodontitis",
"Premature tooth loss",
"Palmoplantar keratosis"
],
"content": "Introduction\n\nPapillon-Lefèvre syndrome (PLS) is an autosomal recessive disorder that typically becomes apparent from one to five years of age, which coincides with the timing of eruption of the primary dentition. The estimated prevalence of the syndrome is 1–4 cases per million individuals1.\n\nThe exact etiopathogenesis of the syndrome is relatively unclear and different etiological factors have been suggested, such as immunologic, genetic or bacterial, but recently it was suggested that mutations of the cathepsin C gene, which result in deficiency of cathepsin C enzymatic activity, are the possible etiological factor. This was supported by the fact that expression of the cathepsin C gene occurs mainly in epithelial regions, such as the soles, palms and keratinized oral gingiva, which are the most affected areas in patients with PLS2.\n\nAn important feature of the syndrome is the presence of palmoplantar hyperkeratosis; its onset usually occurs between the ages of one and four years and usually involves the palms of the hands and soles of the feet3. Another major feature is severe gingivostomatitis and periodontitis. Deciduous teeth usually erupt in normal sequence and timing and with normal structure and form, although it has been reported that some cases may have microdontia and incomplete root formation4.\n\nFirst, the gingiva becomes inflamed and then rapid destruction of the periodontium occurs. This is manifested in the form of redness and swelling in the gingiva with severe bone resorption and periodontal pockets.
Patients usually suffer from looseness, drifting, migration, and exfoliation of teeth, so that by the age of 4–5 years all primary teeth are prematurely exfoliated and the same cycle is repeated with the permanent teeth5.\n\nA multidisciplinary approach to the management of cases with PLS is usually required, and periodontal treatment, if started early, will decrease the rate of periodontal destruction6.\n\nWe hereby report a rare case that, to the best of our knowledge, may be the first of a child with PLS together with a peripheral ossifying fibroma lesion, which is not a characteristic feature of the syndrome.\n\n\nCase report\n\nA four-year-old Egyptian boy presented to the Pediatric Dental Clinic, Faculty of Dentistry, Cairo University, suffering from premature loss of anterior teeth, friable and bleeding gums and a swelling related to the upper anterior region. Medical history revealed the absence of any medical problems; family history revealed that neither parents nor siblings had the same problem and that the parents' marriage was not consanguineous.\n\nExamination of the palms of the hands revealed normal skin, while the soles of the feet revealed very slight hyperkeratosis (Figure 1a,b). Intraoral examination revealed severe gingival recession; inflammation, especially in the anterior region; aggressive periodontitis; and mobility of the maxillary left central incisor and canine, with a swelling related to the maxillary right missing canine region extending toward the occlusal surface. The swelling appeared as a solitary rounded lesion with a gradual onset over 2 months.
The size of the swelling was 4×4 mm, and upon palpation it was not tender but slightly hemorrhagic (Figure 2a,b).\n\nPhotographs of (a) the palms of the hands showing normal skin and (b) the soles of the feet showing very slight hyperkeratosis.\n\nIntraoral photographs showing (a) severe gingival recession and inflammation, especially in the anterior region, and aggressive periodontitis; (b) swelling related to the maxillary right missing canine region extending toward the occlusal surface.\n\nRadiographic examination showed severe destruction and loss of alveolar bone (Figure 3). Lab investigations were normal (Table 1).\n\nTaking into consideration the clinical features and investigations, a diagnosis of PLS was confirmed.\n\nConventional periodontal treatment in the form of scaling and root planing was performed. Antibiotics (amoxicillin and metronidazole, 250 mg three times daily) for one week along with a mouth rinse (0.2% chlorhexidine gluconate, 10 mL twice daily) were prescribed to the patient7.\n\nExtraction of the maxillary left central incisor and canine was advised, but the parent refused even after the risk of not extracting these loose teeth was explained.\n\nAfter laboratory investigations, excisional biopsy of the swelling was performed under antibiotic coverage and local anesthesia. Thorough curettage of the adjacent periodontal ligament and periosteum was carried out to prevent recurrence (Figure 4 a,b). Histopathological examination revealed the lesion to be a peripheral ossifying fibroma (Figure 5).\n\nPhotographs showing (a) removal of the swelling and (b) excisional biopsy of the swelling.\n\nThe patient was educated about oral hygiene and scheduled for a follow-up visit every month for scaling and monitoring of his condition.\n\nThe patient was followed up for 2 years, during which loss of the maxillary left central incisor occurred and extraction of the loose upper left canine was done, with no recurrence of the lesion (Figure 6).
The palms of the hands revealed no change, while examination of the soles of the feet showed a slight increase in keratosis (Figure 7 a,b).\n\nFollow-up photographs after 2 years showing (a) absence of change in the palms of the hands and (b) slight increase in keratosis in the soles of the feet.\n\n\nDiscussion\n\nPapillon-Lefèvre syndrome (PLS) is inherited as an autosomal recessive disorder, meaning both parents of a patient with PLS must carry the gene for the syndrome to manifest in their offspring. However, in the present case the parents are clinically healthy with no family history of the disorder. Studies have shown that when carrier parents for the affected gene mate, there is a 25% chance that they have an affected offspring8. This could explain why the child had the syndrome although his parents were clinically healthy.\n\nThe intraoral appearance of severe aggressive periodontitis, which appears at the age of 3–4 years following complete eruption of the primary teeth as seen in this case, concurred with observations in similar reported cases in the literature, where primary teeth develop normally but eruption is accompanied by severe gingivitis followed by periodontal destruction, resulting in early loss of primary teeth9.\n\nUllbro et al.10 suggested that the two major components of PLS (palmar-plantar hyperkeratosis and aggressively progressing periodontitis) are not related to each other, as these authors found an absence of association between the degree of hyperkeratosis and the severity of periodontitis. This is in accordance with our case, as the degree of hyperkeratosis was slight although the periodontitis was severe.\n\nAcrodynia, hypophosphatasia and cyclic neutropenia are differential diagnoses of PLS. This case is not acrodynia due to the absence of erythrocyanosis, insomnia, and teeth erupting prematurely with dystrophic enamel.
It is not hypophosphatasia due to the normal level of alkaline phosphatase, and it is not cyclic neutropenia, as in cyclic neutropenia palmoplantar hyperkeratosis is absent11.\n\nManagement of cases with PLS should be multidisciplinary, involving dentists, dermatologists and pediatricians. Early diagnosis and management of oral problems help in reducing the undesirable sequelae of the syndrome. Following the treatment protocol for periodontal therapy proposed by Ullbro et al.10, periodontal deterioration can be minimized. This includes: scaling and polishing; giving systemic antibiotics aimed at eliminating the reservoir of causative organisms; extraction of teeth having a poor prognosis; giving instructions for the maintenance of oral hygiene; and continuous monitoring and frequent recall appointments.\n\nIn the present case, an early diagnosis of PLS and a treatment protocol minimized the periodontal deterioration and prevented further loss of other teeth. The parents were satisfied with these results.\n\n\nConsent\n\nWritten informed consent for publication of the clinical details and images was obtained from the patient's mother.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHattab FN, Rawashdeh MA, Yassin OM, et al.: Papillon-Lefèvre syndrome: a review of the literature and report of 4 cases. J Periodontol. 1995; 66(5): 413–420. PubMed Abstract | Publisher Full Text\n\nHart TC, Hart PS, Bowden DW, et al.: Mutations of the cathepsin C gene are responsible for Papillon-Lefèvre syndrome. J Med Genet. 1999; 36(12): 881–887. PubMed Abstract | Free Full Text\n\nJanjua SA, Khachemoune A: Papillon-Lefèvre syndrome: case report and review of the literature. Dermatol Online J. 2004; 10(1): 13. PubMed Abstract\n\nFahmy MS: Papillon-Lefevre syndrome: Report of four cases in two families with a strong tie of consanguinity. A clinical, radiographic, haematological and genetic study. J Oral Med. 1987; 42: 263–268.\n\nMahajan VK, Thakur NS, Sharma NL, et al.: Papillon-Lefèvre syndrome. Indian Pediatr. 2003; 40(12): 1197–1200. PubMed Abstract\n\nAshri NY: Early diagnosis and treatment options for the periodontal problems in Papillon-Lefèvre syndrome: a literature review. J Int Acad Periodontol. 2008; 10(3): 81–6. PubMed Abstract\n\nKellum RE: Papillon-Lefèvre syndrome in four siblings treated with etretinate. A nine-year evaluation. Int J Dermatol. 1989; 28(9): 605–608. PubMed Abstract | Publisher Full Text\n\nKulasekara B: Hyperkeratosis palmoplantaris (Papillon-Lefèvre syndrome). A case report. Trop Geogr Med. 1988; 40(3): 257–8. PubMed Abstract\n\nHattab FN, Amin WM: Papillon-Lefèvre syndrome with albinism: a review of the literature and report of 2 brothers. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2005; 100(6): 709–16. PubMed Abstract | Publisher Full Text\n\nUllbro C, Crossner CG, Nederfors T, et al.: Dermatologic and oral findings in a cohort of 47 patients with Papillon-Lefèvre syndrome. J Am Acad Dermatol. 2003; 48(3): 345–351. 
PubMed Abstract | Publisher Full Text\n\nNagaveni NB, Suma R, Shashikiran ND, et al.: Papillon-Lefevre syndrome: Report of two cases in the same family. J Indian Soc Pedod Prev Dent. 2008; 26(2): 78–81. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "38401",
"date": "19 Sep 2018",
"name": "Marwa Mokbel ElShafei",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe case report concerns a 4-year-old boy suffering from looseness of some teeth and loss of many others. Palmar-plantar hyperkeratosis is noticed on his palms and soles; although not severe, it is well detected. Aggressive progressive periodontitis is diagnosed as the cause of the loss of teeth. A painless swelling is found on the gingiva related to the upper anterior teeth; this swelling was excised and diagnosed as a peripheral ossifying fibroma.\n\nFollow-up and scheduled scaling and polishing to prevent the sequelae of aggressive periodontitis is the management chosen for this patient.\nAnother keyword should be added: \"peripheral ossifying fibroma\". Another photomicrograph is needed to confirm the presence of calcification, and possible immunohistochemical staining with cathepsin C and with calcitonin is an option. State how long the monthly follow-up continued. How did you restore the lost permanent central incisor, and how would you prevent future loss and looseness of teeth due to the syndrome's periodontitis?\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "38617",
"date": "05 Oct 2018",
"name": "Noha Ezzat Sabet",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe case report is quite informative and well written, clear and easily understood. The subject is addressed clearly and the diagnostic procedures clarify the point of interest. The results are professionally discussed, and the conclusion that calls for early diagnosis to minimize the progress of dental loss and periodontal deterioration is of great interest. I think the authors should have clarified whether or not they restored the missing teeth. I would also recommend a longer follow-up of the case to assess the condition of the permanent teeth after their eruption and to ensure that their eruption time is not affected by the periodontal condition.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1420
|
https://f1000research.com/articles/7-1419/v1
|
06 Sep 18
|
{
"type": "Research Article",
"title": "Ultrasound imaging aids the learning of the landmark technique for lumbar puncture in novice learners in a secure training environment",
"authors": [
"Rune Sarauw Lundsgaard"
],
"abstract": "Background: Performing a lumbar puncture (LP) is a key skill to master for doctors in the emergency department (ED), but the level of success and rate of complications still differ considerably. Studies have shown that LP is attempted at a different level than intended in 30% of cases when the classical landmark technique is used. Ultrasound-assisted LP can reduce the risk of failed LP, possibly owing to its ability to visualize the relevant anatomy of the spine, but only a few studies have considered its potential in learning environments with novice learners. Methods: Medical students and first-year trainee doctors who had never performed an LP in a clinical setting were asked to locate and mark the exact location of where to insert the needle when performing an LP, first using the classical landmark technique and then using ultrasound. Corrections of the marked locations were registered for each attempt. Each participant marked three different healthy volunteers in both sitting and lying positions. Results: The accuracy of LP landmarking, measured as the total of correct markings (“unchanged”) vs. incorrect markings (“changed”), improved significantly in both the sitting (p = 0.028) and lying (p = 0.002) positions. All participants were positive about the use of ultrasound when learning how to identify the correct LP marking, mostly because of improved understanding of the anatomical structures and improved confidence in the succeeding attempts. Conclusion: Ultrasound can assist and potentially increase the learning of the landmark technique in novice learners in a secure training environment. Visualizing the underlying anatomical structures of the landmark technique in this way can add a second level of security for the learner in the future practice of LP.",
"keywords": [
"lumbar puncture",
"ultrasound guidance",
"novice learners"
],
"content": "Introduction\n\nPerforming a lumbar puncture (LP) is a key skill to master for medical doctors in the emergency department (ED) for diagnosing severe conditions such as life-threatening meningitis or subarachnoid hemorrhage (Evans et al., 2018; Stewart et al., 2014). The standard procedure, using anatomical landmarks to identify the correct intervertebral level, has not changed since it was developed in 1891 (Doherty & Forbes, 2014). However, even though LP is a core skill, the level of success and rate of complications still differ considerably between doctors (Evans et al., 2018), and studies have shown that LP is attempted at a different level than intended in 30% of attempts when the classical landmark technique is used (Duniec et al., 2013; Evans et al., 2018).\n\nNewer techniques to improve accuracy and rate of success when performing LP have been developed but have not yet become a new standard of care (Evans et al., 2018; Peterson et al., 2014; Stewart et al., 2014). Ultrasound-assisted LP can reduce the risk of failed LP in some cases, suggesting that the ability to visualize the relevant anatomy of the spine may contribute to its success (Shaikh et al., 2013).\n\nConsequently, interest has been growing in the use of ultrasound in LP, but its cost-effectiveness has been questioned, especially due to the increased time needed to perform the procedure and the need for dedicated training in the ultrasound equipment (Duniec et al., 2013; Peterson et al., 2014; Pisupati et al., n.d.; Shaikh et al., 2013).\n\nIn contrast to the growing attention mentioned above, only a few studies have explored the effect of using ultrasound when teaching LP to medical students and novice medical doctors (Grau et al., 2003).
Even though ultrasound has demonstrated the potential to improve learning curves in more experienced doctors performing spinal procedures (Grau et al., 2003), so far only the classic landmark technique is taught at Danish medical schools.\n\nThis study aims to explore how the untrained use of ultrasound affects the ability of inexperienced LP performers to identify the intended intervertebral level in the LP procedure using the classical landmark technique. The study focuses on novice doctors and medical students with no prior experience or training in using ultrasound.\n\n\nMethods\n\nMedical students and first-year trainee doctors who had never performed an LP in a clinical setting were recruited by direct contact by R.S.L. in the ED of Nykøbing Falster Hospital, a rural hospital in Eastern Denmark. Each participant performed three non-invasive sessions (drawing with a surgical pen) on three different healthy volunteers (medical professionals and students at the site). The volunteers were also recruited by direct contact by R.S.L. in the ED.\n\nA total of six medical students and six first-year trainee doctors were included in the study. Each participant (learner) marked locations for LP by landmark and ultrasound on three different volunteers in both sitting and lying positions. As each marking was registered as a changed or unchanged location (by ultrasound), this generated a total of 72 data points, six for each participant (learner).\n\nNo specific instruction about the LP procedure was given before enrolling, but participants were asked to go through the hospital’s procedural guidelines on LP and study the relevant anatomy if needed. None of the participants had any experience in using ultrasound. The research was exempted from ethical approval by the Regional Ethics Committee, Zealand, Denmark. Participants and volunteers were verbally instructed and written informed consent was obtained on-site.
Data were anonymized.\n\nSessions were performed in an available patient room in the ED using a regular hospital bed.\n\nBefore the procedure, each participant was briefly interviewed on how to perform LP, to make sure their theoretical knowledge was solid. All participants were rated as having equivalent knowledge, and all participants identified the intervertebral space of L3-L4 as their aim when performing the LP by landmark technique.\n\nEach participant was asked to locate and mark the exact location of where to insert the needle when performing an LP. The location was marked using a surgical marker. Each participant marked the volunteer first sitting down and then lying down. Each mark was given a number and was visible during the entire session. The marks were afterward removed with 85% ethanol skin disinfectant. Each participant marked three volunteers (e.g. A, B and C) consecutively. Volunteers were used for more than one, but not all, participants due to the feasibility of the study. A total of 15 volunteers were included.\n\nAfter locating the intended intervertebral level of the LP in both sitting and lying positions, the participant was handed the linear probe of the Philips Sparq Ultrasound System with “Simplicity Mode” enabled. Only the linear probe (Philips L12-3) was mounted to the ultrasound workstation. On the Philips Sparq, “Simplicity Mode” was enabled as standard and only the depth setting was adjusted during the session. Adjustment of the depth setting was done by the investigator when requested. The ultrasound workstation was placed on the opposite side of the bed from the participant, and the screen was adjusted to either the left or right side of the volunteer depending on participant preference. The participant was then asked to locate the intervertebral space of L3-L4 using the ultrasound workstation and linear probe as a guide. Ultrasonic gel was applied to the probe before handing it to the participant.
No additional instruction on reading the ultrasonic image was given (Speer et al., 2013).\n\nThe location found using ultrasound was rated as “correct” and subsequently verified by the investigator, R.S.L. R.S.L. is an experienced clinician with more than 5 years of practice in LP. It was recorded whether the participant chose a different location for spinal access when using ultrasound (“changed”) or was satisfied with the area already marked (“unchanged”).\n\nQualitative data in the form of spontaneous comments about the learning experience were gathered through informal conversation during the sessions. Qualitative data were written down as keywords in Danish by R.S.L. and transcribed afterward.\n\nThe sample size was calculated using ClinCal online software. Power was set to 85% and alpha to 0.05; the population proportion was 70% and the study group proportion 30%. P values were calculated using Student’s t-test with Microsoft Excel (2016) software (Barath & Rosner, 1992).\n\n\nResults\n\nAll ultrasound markings were found (by the investigator’s control) to be at the intended intervertebral level (L3/L4).\n\nThe success rate of participants’ LP markings is shown in Table 1 (sitting: S1, S2, S3; lying down: L1, L2, L3). The mean success rate of LP by landmarks only was 21%. The first attempt (S1) had a success rate of 0%.\n\nThe improvement of accuracy in LP marking is shown in Table 2 as a total of correct markings (“unchanged”) vs. a total of incorrect markings (“changed”) for each attempt (S1, S2, S3 and L1, L2, L3). Markings of LP by landmark improved significantly in both sitting (p = 0.028) and lying positions (p = 0.002).\n\nAll participants chose to change at least one of three marks when using ultrasound. The marks from the first session (S1 and L1) in particular were altered by most participants.\n\nAll participants expressed positive views on the use of ultrasound when learning how to identify the correct LP marking. 
Half of the participants would consider using ultrasound in the same manner when teaching procedures to colleagues and students in the future. The main reasons for this were: improved understanding of the anatomical structures and improved confidence in the succeeding attempts. Keywords can be found in Dataset 1 (Lundsgaard, 2018).\n\n\nDiscussion\n\nThe precision of marking the correct location for LP using the landmark technique alone increased markedly in the novice learners between sessions. This finding is consistent with the findings of Grau et al. (2003), who found that ultrasound significantly improved the learning curves in spinal procedures. This study did not specifically seek to explore the possible reasons for the improved learning. However, the participants verbally expressed that the ultrasound aided their understanding of the anatomical structures and served as a “quality check” when learning the procedure. These findings are in line with previous work suggesting that one of the strengths of using ultrasound for LP is the visualization of anatomy, e.g., in complex patients, such as the elderly or patients with high BMI (Ansari et al., 2014; Peterson et al., 2014; Pisupati et al., n.d.; Shaikh et al., 2013).\n\nThe success rate of correct location by landmark technique alone in our participants was lower (21%) than in studies of more experienced doctors (Duniec et al., 2013), corresponding to our participants’ low level of clinical experience. As the participants were novices with an initial success rate of 0%, the ultrasound aided their learning from the beginning, potentially reducing the risk of carrying failure forward and the need for unlearning inexpedient techniques, which is both stressful and challenging for the learner (Heydari et al., 2017; Rushmer & Davies, 2004). 
Using ultrasound clinically on a regular basis might be out of scope for most novice doctors, but as a simple training aid, it opens new opportunities.\n\nThis study concludes that putting an ultrasound probe in the hands of novice learners in a secure training environment may be highly beneficial, and could add a second level of security for the learner’s future practice of the landmark LP technique.\n\n\nData availability\n\nDataset 1. Raw data LP markings and keywords identified during discussions surrounding use of ultrasound in lumbar puncture. Please note that keywords are in Danish. https://doi.org/10.5256/f1000research.16133.d216926 (Lundsgaard, 2018).",
"appendix": "Grant information\n\nThe author declare that no grants were involved in supporting this work.\n\n\nReferences\n\nAnsari T, Yousef A, El Gamassy A, et al.: Ultrasound-guided spinal anaesthesia in obstetrics: is there an advantage over the landmark technique in patients with easily palpable spines? Int J Obstet Anesth. 2014; 23(3): 213–216. PubMed Abstract | Publisher Full Text\n\nBarath E, Rosner BA: Fundamentals of Biostatistics. Biometrics. 1992; 48(3): 976. Publisher Full Text\n\nDoherty CM, Forbes RB: Diagnostic Lumbar Puncture. Ulster Med J. 2014; 83(2): 93–102. PubMed Abstract | Free Full Text\n\nDuniec L, Nowakowski P, Kosson D, et al.: Anatomical landmarks based assessment of intravertebral space level for lumbar puncture is misleading in more than 30%. Anaesthesiol Intensive Ther. 2013; 45(1): 1–6. PubMed Abstract | Publisher Full Text\n\nEvans DP, Tozer J, Joyce M, et al.: Comparison of Ultrasound-Guided and Landmark-Based Lumbar Punctures in Inexperienced Resident Physicians. J Ultrasound Med. 2018. PubMed Abstract | Publisher Full Text\n\nGrau T, Bartusseck E, Conradi R, et al.: Ultrasound imaging improves learning curves in obstetric epidural anesthesia: a preliminary study. Can J Anaesth. 2003; 50(10): 1047–1050. PubMed Abstract | Publisher Full Text\n\nHeydari A, Moghaddam KB, Manzari ZS, et al.: Mental challenges of nurses in the face of unlearning situations in hospitals: A qualitative study. Electron Physician. 2017; 9(9): 5237–5243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLundsgaard RS: Dataset 1 in: Ultrasound imaging aids the learning of landmark technique (lumbar puncture) in novice learners in a secure training environment. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.16133.d216926\n\nPeterson MA, Pisupati D, Heyming TW, et al.: Ultrasound for routine lumbar puncture. Acad Emerg Med. 2014; 21(2): 130–136. 
PubMed Abstract | Publisher Full Text\n\nPisupati D, Heyming TW, Lewis RJ: Effect of ultrasonography localization of spinal landmarks on lumbar puncture in the emergency department. Elsevier.\n\nRushmer R, Davies HT: Unlearning in health care. Qual Saf Health Care. 2004; 13 Suppl 2: ii10–ii15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShaikh F, Brzezinski J, Alexander S, et al.: Ultrasound imaging for lumbar punctures and epidural catheterisations: systematic review and meta-analysis. BMJ. 2013; 346: f1720. PubMed Abstract | Publisher Full Text\n\nSpeer M, McLennan N, Nixon C: Novice learner in-plane ultrasound imaging: which visualization technique? Reg Anesth Pain Med. 2013; 38(4): 350–352. PubMed Abstract | Publisher Full Text\n\nStewart H, Reuben A, McDonald J: LP or not LP, that is the question: gold standard or unnecessary procedure in subarachnoid haemorrhage? Emerg Med J. 2014; 31(9): 720–723. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "38407",
"date": "08 Oct 2018",
"name": "Asoka Weerasinghe",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFirstly I must say this is an interesting article on an emerging subject.\nIn respect of current literature review I think the author has included the relevant evidence.\n\nHowever there are few descriptions missing in the methodology\n\n1) It is stated that RSL recruited the participants by direct contact. Then the question is what the selection criteria was used?\n\n2) Similarly how was the volunteers selected?\n3) Although it is stated that no prior knowledge was tested, it is mentioned under 'location\" that their theoretical knowledge was tested to be solid which contradicts with the previous statement. I think we need to know how was the knowledge was tested eg / questionnaire is so need a publish the same\n\n4) The article says that the participants did not have any US exposure but managed to identify the sonoanatomy which I find quite difficult to believe. As a clinician who teachers this for different levels of clinicians, they need to be taught how to identify the sonoanatomy especially the reference point (saccrum as the flat surface on the image and the LS view should me paramedian etc) when marking the spinal levels. So I think we need more information on the same.\n\n5) Usually these scans are done using the curvilinear probe due to depth requirement but it is stated that they used liner probe, hence again the question of the selection criteria of volunteers need to explained\n\n6) It is stated that total of 15 volunteers used. 
So what provision was made in the analysis to account for differences between the volunteers scanned?\n\n7) It states that RSL decided whether the marking was correct or not; again, the reader needs more description of what is meant by a correct position and how it was determined.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "41627",
"date": "17 Dec 2018",
"name": "Rein Ketelaars",
"expertise": [
"Reviewer Expertise Ultrasonography in Prehospital and Emergency Medicine"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThank you for the opportunity to read your interesting work on evaluating US-guided lumbar puncture in the training of students and young doctors.\nDespite the interesting read, I have to make some comments.\nIn general\nReading your paper, I understand that the markings where the LP should be performed improve after using ultrasound. I'm not sure how this demonstrates that ultrasound aids/\"may be highly beneficial\"/adds a second level of security in learning. What is the first level of security? Also, It is not clear to me if the marking is alway correct after using ultrasound. US may often change the marking, but are the markings 100% correct after correcting? How do you define a secure training environment? It is stated in the title and discussion to be secure. But what elements contribute to this security? Please check spelling and grammar for readability. For instance the use of \"laying\" in both tables and the methods section. \"Liner probe\"\nAbstract\nPlease be more exact in the abstract. For instance the methods section of the abstract. It is not clear to me that the markings are sometimes changed based on ultrasound.\nIntroduction\nSecond last paragraph: \"using LP when teaching LP...\" might not be correct.\nMethods section\nIn what time period was the study performed? Since US is adopting progressively among (young) doctors, this is relevant information. 
Maybe you could provide the reader with some more information about the volunteers: age, BMI, previous illnesses (or the absence thereof). Inclusion/exclusion criteria of learners and volunteers? - Especially since you describe the interviews that were held to determine the theoretical knowledge of the learners. How can you expect medical students and young doctors with apparently no experience with US imaging to identify the L3/L4 intervertebral space? Especially since \"No additional instruction on reading the ultrasonic image was given.\" -- Also, some more explanation is needed why you refer to the paper by Speer 2013. The power calculation was based on what previous study results? Some explanation is needed here. Student's t-test? You compared the number of correct markings without and with ultrasound, but these are paired dichotomous values. Perhaps McNemar's test (or ANOVA?) is a far more appropriate test. Please consult a statistician on this matter.\nResults\nTo my understanding you have determined the success rate of the landmark technique by letting the learners check with ultrasound? In other words: if all marks remain unchanged after US, the score would be 100% correct. In the methods section you described the investigator as judging all US markings to be \"correct\" or not. But these data are not reported in the results section or discussion. I suppose the sentence \"All ultrasound markings were found ...\" should be expanded with a confidence interval? What does current literature say about the accuracy of determining the correct intervertebral space by experienced clinicians? Is it always 100% correct? Please consider adding some descriptive statistics and confidence intervals? You didn't compare the accuracy of the landmark technique between sessions. Did the learners perform better in the second session when compared to the first? And in the third? 
I feel this information is vital since it might demonstrate any learning effect and thus substantiate the title. It might have been interesting to know how incorrect markings were distributed between too low and too high.\nDiscussion\n\"Using ultrasound clinically on a regular basis might be out of scope...\" - this depends on the time and place where novice doctors are employed. Times are a-changing!\n\nThank you again for letting me read your interesting paper. I feel it still needs some work, though.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1419
|
https://f1000research.com/articles/7-963/v1
|
28 Jun 18
|
{
"type": "Correspondence",
"title": "Major histocompatibility complex (MHC) fragment numbers alone – in Atlantic cod and in general - do not represent functional variability",
"authors": [
"Johannes M. Dijkstra",
"Unni Grimholt",
"Unni Grimholt"
],
"abstract": "This correspondence concerns a publication by Malmstrøm et al. in Nature Genetics in October 2016. Malmstrøm et al. made an important contribution to fish phylogeny research by using low-coverage genome sequencing for comparison of 66 teleost (modern bony) fish species, with 64 of those 66 belonging to the species-rich clade Neoteleostei, and with 27 of those 64 belonging to the order Gadiformes. For these 66 species, Malmstrøm et al. estimated numbers of genes belonging to the major histocompatibility complex (MHC) class I lineages U and Z and concluded that in teleost fish these combined numbers are positively associated with, and a driving factor of, the rates of establishment of new fish species (speciation rates). They also claimed that functional genes for the MHC class II system molecules MHC IIA, MHC IIB, CD4 and CD74 were lost in early Gadiformes. Our main criticisms are (1) that the authors did not provide sufficient evidence for presence or absence of intact functional MHC class I or MHC class II system genes, (2) that they did not discuss that an MHC subpopulation gene number alone is a very incomplete measure of MHC variance, and (3) that the MHC system is more likely to reduce speciation rates than to enhance them. We conclude that their new model of MHC class I evolution, reflected in their title “Evolution of the immune system influences speciation rates in teleost fish”, is unsubstantiated. In addition, we explain that their “pinpointing” of the functional loss of the MHC class II system and all the important MHC class II system genes to the onset of Gadiformes is preliminary, because they did not sufficiently investigate the species at the clade border.",
"keywords": [
"fish",
"MHC",
"Atlantic cod",
"evolution",
"speciation rate"
],
"content": "Correspondence\n\nIn the below, we explain our criticisms of the Malmstrøm et al.1 article as they are summarized in our abstract.\n\nWhen was the MHC class II system lost in Gadiformes? The data as presented by Malmstrøm et al.1 suggest a simultaneous loss of major histocompatibility complex (MHC) IIA, MHC IIB, CD4 and CD74 functions at the evolutionary onset of Gadiformes (see their Figure 2). However, within their datasets for gadiform fishes, sequence reads that represent these genes can readily be detected (Table S1 and Supplementary File 1). These sequence read numbers are much lower than found for the non-gadiform fish, and they may be contaminations, but that should be appropriately tested. Meanwhile, for several non-gadiform fishes, including for S. chordatus which among the investigated fishes is the species closest related to Gadiformes, there are no full-length MHC IIA, MHC IIB, CD4 or CD74 gene sequences in the unitig and scaffold datasets presented by Malmstrøm et al.1 (Supplementary File 2 and Table S2). We agree with the conclusion by Malmstrøm et al.1 that their data suggest that throughout Gadiformes there is no canonical MHC class II system. However, as for the evolutionary timings of the loss of an intact MHC class II system and of the losses of the individual MHC class II system genes, we find their study technically wanting and preliminary. The combination of (i) not finding intact full-length sequences for all important MHC class II system genes in species closely related to Gadiformes, while (ii) finding reads of these genes in gadiform fishes, prohibits what the authors call “pinpointing the loss of MHC II pathway genes to the common ancestor of Gadiformes”. 
At least for a few species at either side of the Gadiformes clade border, Malmstrøm et al.1 should have substantiated their claims by the addition of specific PCR plus sequencing analyses, which should confirm the presence of full-length intact MHC class II genes in the non-gadiform fishes, and their absence in the gadiform fishes.\n\nDiscussion of the MHC class I counting strategy by Malmstrøm et al.1 Whereas our criticisms of the MHC class II system analysis by Malmstrøm et al.1 are about technical issues and the preliminary character of their conclusions, we more fundamentally disagree with their analyses and discussions of MHC class I. The authors assumed1, as postulated by other researchers before them, that there can be a “copy number optimum” of MHC genes affected by a tradeoff between a higher number allowing the presentation of more pathogen antigens and that higher number also having a depletion effect on the T cell population. Regardless of the extent to which this mostly theoretical concept is true2, the MHC counting strategy by Malmstrøm et al.1 should be deemed incomplete and far too simplistic. For their number determination, Malmstrøm et al.1 relied solely on estimation of U plus Z lineage genomic α3 exon fragment numbers, despite the fact that the typical “birth and death” mode of MHC evolution can produce many pseudogenes3. The decision of the authors to only count U plus Z lineage gene fragments was based on their unsubstantiated perception that (neo-)teleost U and Z molecules “predominantly” bind peptide ligands1. However, not all teleost U and Z molecules are expected to present peptides4,5; for example, this is not expected for the majority of U lineage molecules in the neoteleost fish medaka6 and the non-neoteleost fish rainbow trout7; how this stands in the majority of the species investigated by Malmstrøm et al.1 remains to be determined. 
Furthermore, it should be realized that MHC class II and non-peptide-binding MHC class I molecules (such as, possibly, teleost molecules of the MHC class I lineages L, P and S4) can also contribute to T cell depletion (e.g.8). Peculiarly, while it follows from their referencing that Malmstrøm et al.1 were aware of an MHC class II impact on T cell depletion, the authors did not look at MHC class II when investigating their optimum MHC number model. A more general shortcoming of the article1 is the lack of awareness that the direct determiner of the levels of “antigen coverage” and T cell depletion is the variation between the relevant MHC molecules2, rather than merely the MHC gene copy number. Table 1 (with detailed explanations in Supplementary File 3) summarizes that different teleost fish species can have very different levels of variation between MHC molecules4, and that despite its many U lineage gene copies the extent of MHC variation in Atlantic cod can be considered relatively limited. Previously, we showed that salmon, zebrafish and eel share variation in U lineage sequences, dating from probably more than 300 million years ago (MYA), whereas all U lineage variation found within the neoteleost fishes stickleback and Atlantic cod probably was established only after these two species separated around 150 MYA4. Without experimental evidence, it cannot simply be assumed that “antigen coverage” and/or T cell depletion are highest in fishes with the highest counts of U plus Z α3 fragments, while neglecting levels of variance among the intact U and Z molecules and the possible presence of other categories of MHC molecules. 
As a last critical comment we point out that, in stark contrast to the evolution of any other known MHC lineage, most deduced Z lineage molecules are characterized by a putative peptide binding groove which was almost perfectly conserved since >400 MYA4; this questions the model by Malmstrøm et al.1 that Z lineage evolution was driven by pathogen antigen variation, and is a further argument against the use of combined U+Z numbers for analysis of MHC evolution.\n\nTable 1 shows the lowest percentages of amino acid sequence identities between membrane-distal domains (α1+α2 for MHC I, α1 for MHC IIA, β1 for MHC IIB) of same-category MHC molecules found between reported sequences of the same species. In some species no genes for particular categories were found (black boxes), while in other instances only one seemingly intact gene sequence was found (1 sequence) or only pseudogenes were found (pseudogene). A more detailed explanation of this table is provided in Supplementary File 3.\n\nDiscussion of the model by Malmstrøm et al.1 saying that U+Z numbers in teleost fish affect speciation rates and that the half-life for reaching the U+Z optimum number is 23 million years. Malmstrøm et al.1 postulated their multiple-regime Ornstein-Uhlenbeck model with very slow progress towards optimum MHC numbers because it was the best-fitting model among the few models that they tested. However, an even better-fitting model would be that in each species the respective optimal U and Z gene organizations were achieved. Further criticism is that their calculation methods for optimum U+Z numbers and half-life periods incorporated calculations of U+Z gene multiplication speeds, which suffered from the fact that (as for their other considerations) Malmstrøm et al. considered all U and Z genes as identical mathematical units1. 
For such speed calculations U and Z genes should have been studied separately, and it also should have been realized that whereas from some U or Z genes multiple new copies were generated, others were lost in accordance with the “MHC gene birth and death” model3. Lastly, even if, regardless of the debatable calculations for speeds and optimum numbers, there is a positive association in neoteleost fish between speciation rates and U+Z α3 fragment numbers (see their Figure 3), their model, which considers MHC genes as “speciation genes that promote rapid diversification”1, would still be implausible in regard to cause and effect. Namely, in most species, there is a strong evolutionary pressure to maintain old allelic variation within MHC genes (trans-species polymorphism3,4,9), which, if anything, is likely to slow down speciation rates because it increases the required size of the founder population9. If old allelic or haplotype variation can't be maintained because of rapid speciation through small founder populations, it can be speculated that a species might benefit from an enhanced capacity for the creation of new MHC allelic and/or haplotype variation by duplications/deletions and recombination10 between a high number of linked MHC gene copies. However, in that scenario it wouldn't be the MHC organization which drives the speciation rate, as suggested by Malmstrøm et al.1, but the other way around.\n\n\nData availability\n\nThe data analyzed in this study are publicly available. Details are explained in Supplementary File 1, Supplementary File 2 and Supplementary File 3.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary Table S1: Examples of sequence reads of major histocompatibility complex (MHC) class II system genes found in single read archive (SRA) datasets published by Malmstrøm et al. for Gadiformes and closely related fishes.\n\nClick here to access the data.\n\nSupplementary File 1: List of sequence reads in SRA datasets of Gadiformes published by Malmstrøm et al. that match with major histocompatibility complex (MHC) class II system genes.\n\nClick here to access the data.\n\nSupplementary File 2: Investigation of unitigs with (partial) major histocompatibility complex (MHC) class II system genes which are listed by Malmstrøm et al. in their Table S7 for the non-gadiform fishes S. chordatus, C. roseus, Z. faber, T. subterraneus, P. transmontana, and P. japonica.\n\nClick here to access the data.\n\nSupplementary File 3: Detailed explanation of Table 1.\n\nClick here to access the data.\n\n\nReferences\n\nMalmstrøm M, Matschiner M, Tørresen OK, et al.: Evolution of the immune system influences speciation rates in teleost fishes. Nat Genet. 2016; 48(10): 1204–10. PubMed Abstract | Publisher Full Text\n\nBorghans J, Keşmir C, de Boer RJ: MHC diversity in Individuals and Populations. In: Flower D, Timmis J, editors. In Silico Immunology. Springer, New York NY; 2007; 177–195. Publisher Full Text\n\nNei M, Rooney AP: Concerted and birth-and-death evolution of multigene families. Annu Rev Genet. 2005; 39: 121–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrimholt U, Tsukamoto K, Azuma T, et al.: A comprehensive analysis of teleost MHC class I sequences. BMC Evol Biol. 2015; 15: 32. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalmstrøm M, Jentoft S, Gregers TF, et al.: Unraveling the evolution of the Atlantic cod's (Gadus morhua L.) alternative immune strategy. PLoS One. 2013; 8(9): e74004. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNonaka MI, Aizawa K, Mitani H, et al.: Retained orthologous relationships of the MHC Class I genes during euteleost evolution. Mol Biol Evol. 2011; 28(11): 3099–112. PubMed Abstract | Publisher Full Text\n\nMiller KM, Li S, Ming TJ, et al.: The salmonid MHC class I: more ancient loci uncovered. Immunogenetics. 2006; 58(7): 571–89. PubMed Abstract | Publisher Full Text\n\nSchümann J, Pittoni P, Tonti E, et al.: Targeted expression of human CD1d in transgenic mice reveals independent roles for thymocytes and thymic APCs in positive and negative selection of Valpha14i NKT cells. J Immunol. 2005; 175(11): 7303–10. PubMed Abstract | Publisher Full Text\n\nKlein J, Sato A, Nikolaidis N: MHC, TSP, and the origin of species: from immunogenetics to evolutionary genetics. Annu Rev Genet. 2007; 41: 281–304. PubMed Abstract | Publisher Full Text\n\nDoxiadis GG, de Groot N, Otting N, et al.: Haplotype diversity generated by ancient recombination-like events in the MHC of Indian rhesus macaques. Immunogenetics. 2013; 65(8): 569–84. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "35833",
"date": "19 Jul 2018",
"name": "Anthony B. Wilson",
"expertise": [
"Reviewer Expertise Adaptive Immunity",
"Evolutionary Biology",
"Speciation"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nDijkstra & Grimholt present a critical analysis of Malmstrom et al.'s 2016 Nature Genetics article1, which investigated the evolution of MHC I and II loci in gadiform fishes using a low coverage genomic screen of 66 species, inferring a link between adaptive immune evolution and speciation rates in this group. Dijkstra & Grimholt’s criticisms are wide ranging – I deal with each of their major areas of concern below:\nI. MHC Class II loss in gadiform fishes. The authors highlight two serious flaws in the Malmstrom analysis, demonstrating that the original dataset contains sequence reads of MHC II and associated loci in several species that were overlooked in the original analysis. Equally importantly, datasets for several of the outgroup taxa lack these genes, raising questions concerning the reliability of the underlying data. Malmstrom et al.'s genomic screen is understandably low coverage given the taxonomic breadth of their survey, but I agree with Dijkstra & Grimholt that based on the existing evidence, one cannot confidently infer the timing of MHC II gene loss in this group.\nII. MHC I allele counting strategy. Djikstra & Grimholt challenge the allele counting strategy used by Malmstrom et al, particularly their focus on U & Z loci (teleost fish have at least 5 different MHC I lineages2), based on their assumption that these loci are chiefly involved in binding peptide ligands. 
While I agree that grouping U & Z loci together simplifies their known functional complexity (I was rather confused by this approach myself when reading the original paper), here I feel that Dijkstra & Grimholt could be more constructive in their criticism. At present, it’s not entirely clear what type of analysis they feel would be most suitable. I would also suggest providing slightly more context on the study system to assist readers who may be unfamiliar with the original work.\n\nWhile Dijkstra & Grimholt have elsewhere provided compelling evidence that Z loci may have a very different function, it’s not clear whether they’re suggesting that Malmstrom et al. should have focused solely on U loci, or whether it would have been more appropriate to include all MHC lineages in their analyses. Either way, I would have liked to see whether analyzing the data in the manner preferred by the authors would impact the conclusions of the original article.\nI agree that experimental evidence would be necessary to conclusively demonstrate a link between allelic diversity and function, but given the taxonomic breadth of Malmstrom et al.'s study, surely they wouldn’t expect experimental evidence for all species included in the original study. How much experimental evidence would they deem sufficient? At present, it’s not clear whether they’re simply suggesting that Malmstrom et al. should have been more circumspect in their conclusions, or whether they feel that the results of the analysis are entirely unreliable. Clarification of this point is essential.\nIII. Testing the relationship between MHC allelic diversity and speciation rates in gadiform fishes. The authors raise concerns about the modelling approach used by Malmstrom et al., including their combined analysis of U and Z loci (see above), and their lack of a biologically realistic model of gene evolution incorporating MHC gene gain and loss3 – I agree with these criticisms. 
I do, however, take some issue with their contention that Malmstrøm et al.'s hypothesis is wholly invalid. While there is indeed strong evidence of trans-species MHC polymorphism in some well-studied vertebrate lineages, this does not invalidate an experimental test of an alternative hypothesis. If Dijkstra & Grimholt feel that Malmstrøm et al. have their hypothesis “the wrong way around”, are there any data/analyses that could convince them otherwise?\n\nIs the rationale for commenting on the previous publication clearly described? Yes\n\nAre any opinions stated well-argued, clear and cogent? Yes\n\nAre arguments sufficiently supported by evidence from the published literature or by new data and results? Partly\n\nIs the conclusion balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "3946",
"date": "06 Sep 2018",
"name": "Johannes M. Dijkstra",
"role": "Author Response",
"response": "Dear Dr. Anthony B. Wilson, Thank you for your review and support of our article. We appreciate that an expert such as you is willing to join the public debate so that erroneous/unsubstantiated messages like the ones presented by Malmstrøm et al. cannot take hold in our field. You suggest that we provide lay readers with more background information. Following your suggestion, we tried to do so by writing a tentative introduction section, but decided not to use it, as we found that the necessary length and discussion would distort the article too much. This article is a correspondence, a discussion about another paper, and we feel that this discussion character should be clear throughout the article. Furthermore, F1000Research stipulates that “Correspondence articles are short, peer reviewed comments”, and, as it is, the article is already quite lengthy. You ask us to be more constructive in our criticism towards Malmstrøm et al., and to explain to them what they should have been doing instead. First of all, we would have liked them to make their claims on MHC class II system genes solid, because that seems within close experimental reach, and we believe that they should have concentrated their article on that topic. In regard to MHC class I, we would have liked them to either study those genes intensively or to have refrained from any modeling, and especially to have refrained from highlighting a resulting model in the title. In the supplementary files 1 and 2 we already made some detailed comments about which experiments would be necessary to make the MHC class II system claims more solid and within acceptable standards. Now we have also added a conclusion section to the main text which explains, in general terms, what we would like Malmstrøm et al. to do or not to do, as follows: “Conclusion: Malmstrøm et al.1 used low-coverage genome sequencing for comparison of 66 mostly neoteleost fish, and so helped with elucidating their phylogeny. 
They found that intact MHC class II system genes may be completely absent in Gadiformes, and believed that related non-gadiform fishes have intact MHC class II system genes. However, their genomic databases were incomplete and, in the case of many Gadiformes, spiked with reads from MHC class II system genes that may or may not be contaminations, so that final conclusions require some additional analysis of at least a few species at the gadiform/non-gadiform clades border. We suggest that they need to perform a number of PCR and sequencing experiments to clarify this matter. When comparing class I and class II situations in their investigated neoteleosts, Malmstrøm et al.1 also found that their earlier theory, which was that the absence of an MHC class II system might explain the high number of MHC class I genes in Atlantic cod13, could not be corroborated. Instead, solely based on estimations of U+Z a3 fragment numbers, they1 proposed a new theory on MHC class I evolution which they referred to in their manuscript title. We hope to have shown sufficiently that their conclusions on MHC class I evolution were unsubstantiated, that estimation of U+Z a3 fragment numbers is not a proper way to analyze MHC functions or MHC evolution, and that, apart from not investigating logical units that are better suited for their methods of modeling, the number estimations and modeling systems used by Malmstrøm et al.1 were also flawed and/or untrustworthy. Before any meaningful discussion can be started about the evolution of MHC class I genes in neoteleosts, a much higher level of information about sequences and genomic positions is necessary.” As to your question of how we would have addressed the issue of MHC numbers/variation and thymic T cell depletion: the answer is that we probably wouldn't try. 
Meaningful modeling would require a better understanding of how positive and negative T cell selections in the thymus, involving multiple different MHC molecules, contribute both quantitatively and qualitatively to the T cell pool. Regardless of the thymic selection model, we might be interested in MHC class I gene numbers, but then we would first want to separate the question into functional MHC subclasses. For example, it may be interesting to see whether there is some evolutionary pattern in the number of classical-type polymorphic MHC class I genes (a question which Malmstrøm et al. erroneously seem to think that they were addressing), in the number of genes of nonclassical MHC class I families, or in the number of MHC class I pseudogenes. Possibly, selection for increased diversification speeds may favor unstable haplotypes with many tandemly organized genes and pseudogenes that can function as a recombination reservoir. It would be interesting to see whether there are differences in numbers of MHC class I pseudogenes between those fish species that more stably maintained ancient variation and those that more rapidly acquired new variation. Basically, we would try to get good information on all investigated genes (and not just count a3 fragments), and then would try to answer questions one at a time, directly connected to the data (and not try to make an unsubstantiated overarching model). Without good data, we simply wouldn't start modeling. I hope our above answer, and our extended criticism in the article's main text of the models used by Malmstrøm et al., is also sufficient as a response to the issues that you raised in paragraph III of your review. Sincerely, also on behalf of Dr. Unni Grimholt Hans (J.M.) Dijkstra"
}
]
},
{
"id": "36541",
"date": "27 Jul 2018",
"name": "Brian Dixon",
"expertise": [
"Fish Immunology",
"MHC genes",
"antigen presentation"
],
"suggestion": "Approved",
"report": "Approved\n\nThe critique of Malmstrøm et al. presented here makes some very valid points that are well supported by the literature.\nIt has long been true in fish MHC research that the fact that a gene has not been reported to be present in a particular species does not mean that it is not present. Modern genomics techniques have presented better proof for this assertion, with the lack of an MHCII/CD4 pathway in gadids being the most prominent example, but even modern genomics techniques are not ironclad, 100% proof and should be checked very carefully before definitive statements are made. Thus the comments about verifying the presence or absence of specific genes in the numerous species by other means are valid.\nAdditionally, the treatment of all U and Z genes as identical units while ignoring the allelic diversity of each gene within those classes is indeed a serious flaw in the reasoning of Malmstrøm et al. There is significant variability in the diversity of U gene families, which will have differing effects on T cell selection that simply counting gene numbers will not address.\nDijkstra and Grimholt's critique should be carefully read and addressed.\n\nIs the rationale for commenting on the previous publication clearly described? Yes\n\nAre any opinions stated well-argued, clear and cogent? Yes\n\nAre arguments sufficiently supported by evidence from the published literature or by new data and results? Yes\n\nIs the conclusion balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "3947",
"date": "06 Sep 2018",
"name": "Johannes M. Dijkstra",
"role": "Author Response",
"response": "Dear Dr. Brian Dixon, Thank you for your review and support of our article. We appreciate that an expert such as you is willing to join the public debate so that erroneous/unsubstantiated messages like the ones presented by Malmstrøm et al. cannot take hold in our field. Sincerely, also on behalf of Dr. Unni Grimholt Hans (J.M.) Dijkstra"
}
]
},
{
"id": "36540",
"date": "06 Aug 2018",
"name": "Jerzy K. Kulski",
"expertise": [
"MHC genes",
"retrotransposons",
"evolutionary genomics and biology",
"population diversity"
],
"suggestion": "Approved",
"report": "Approved\n\nThe correspondence by Dijkstra & Grimholt1 raises critical concerns about a publication by Malmstrøm et al in Nature Genetics in October 20162, concluding that their new model of MHC class I evolution, reflected in their title “Evolution of the immune system influences speciation rates in teleost fish”, is unsubstantiated. I concur with their three main criticisms “(1) that the authors did not provide sufficient evidence for presence or absence of intact functional MHC class I or MHC class II system genes, (2) that they did not discuss that an MHC subpopulation gene number alone is a very incomplete measure of MHC variance, and (3) that the MHC system is more likely to reduce speciation rates than to enhance them.”\n\nAll three critical points are well founded and stand alone without much need for further support. However, I have added the following 14-point commentary for Dijkstra & Grimholt1, Malmstrøm et al (2016)2 and others to consider and elaborate on if they would like, because these are important concerns in the field of MHC genomics, biological function and evolution.\n\n1. According to Dijkstra & Grimholt1, “the MHC counting strategy by Malmstrøm et al.2 should be deemed incomplete and far too simplistic. 
For their number determination Malmstrøm et al.2 solely relied on estimation of U plus Z lineage genomic α3 exon fragment numbers, despite that the typical “birth and death” mode of MHC evolution can produce many pseudogenes.” I agree that this is a major problem with the Malmstrøm et al (2016)2 paper, one that also omits the other categories of the MHC I such as the L, S and P lineages, which might contribute to a much larger number of MHC I-like genes. In this regard, it seems that Malmstrøm et al.2 have taken only the genomic exon fragment numbers of the MHC I U and Z lineages to represent the entire immune system of their title.\n\n2. Dijkstra & Grimholt1 are also right to point out that low coverage sequencing by next generation sequencing (NGS) can result in the artifactual loss of genes, which in turn can lead to misleading or incorrect conclusions when counting gene copy numbers or looking for gene losses. Malmstrøm et al's (2016)2 sequencing coverage was 9 to 39x, and they recovered only about 75% of the conserved eukaryotic genes. Therefore, in this situation, the de novo low coverage sequencing data should not have been used as evidence for the absence of genes from the genome without providing a properly organized high coverage map of genomic assemblies to show where the sequences are missing in the genomes. The reviewers and editors of the Malmstrøm et al (2016)2 paper should have been aware of this basic problem of using low coverage NGS, particularly with respect to looking for a few needles in a haystack. For a better understanding of the advantages and disadvantages of MHC genotyping and haplotyping by NGS, see the review by Shiina et al (2016)3.\n\n3. Malmstrøm et al (2016)2 reported that there was no overall correlation between the combined MHC copy numbers and the HOX gene copy numbers that were used as a control. As already pointed out by Dijkstra & Grimholt1, the U and Z genes should have been analysed separately. 
Nevertheless, it would have been interesting to see how these duplicated HOX development genes, which also have been implicated in driving speciation, compared with the properly classified duplicated MHC class I adaptive immune genes (separate Z, X, L, S and P lineages) at the classical and non-classical level in Fig 3 during speciation rate simulations2. In addition, it appears from Fig S3 that Malmstrøm et al (2016)2 might have missed an inverse relationship between MHC & HOX for the MHC copy numbers up to 25 and direct correlation between MHC & HOX for the MHC copy number from 25 to 50.\n\n4. It seems absurd to count up only the short sequences of a3 fragments from a low coverage sequence library and extrapolate the numbers counted in Fig 2b to reconstruct an artificial model for duplicated MHC gene copies influencing speciation or evolution without first knowing their categories (classical versus non-classical presentations), functions, overall structure and coding ability, transcriptional activity and genomic locations. Malmstrøm et al (2016)2 provided no properly organized genomic assemblies or genomic gene maps and no information about genomic distribution of the MHC I or II sequenced fragments or the duplication mechanisms involved. If they had done so, they might have added important information to better assess and place the threshold MHC I copy numbers and gene distributions into some sort of genomic perspective4. More reliable models for the evolution of MHC class I genomic duplications might be achieved by providing duplication gene maps and the phylogenic relationships of the duplicated gene sequences showing likely duplication mechanisms, where and how these genes are located relative to each other, and how the genomic structures have changed in a comparative analysis between species. See Kulski et al. (2004)5 for an example of one such duplication model. 
Mapping with phylogeny is a more informative approach than just constructing phylogenetic trees using one or more single exonic sequences from a limited number of each species and then claiming that the changes in copy numbers influence the speciation rates of almost the entire number of extant fish species. Perhaps, the Malmstrøm et al (2016)2 low coverage sequence libraries could still be used effectively to reconstruct full-length gene structures and targeted genomic regions that harbour multiple copies of the MH(C) genes in a comparative analysis.\n\n5. The multiple-optima Ornstein Uhlenbeck (OU) model6. (a) According to Malmstrøm et al (2016)2 the multiple-optima OU model vastly outperformed alternatives such as Brownian motion, white noise, single-peaked OU and early-burst models. This finding corroborated their hypothesis that MHC I copy number evolution is characterized by selection toward intermediate optima, resulting from a tradeoff between detection and elimination of pathogens. Presumably, the authors preferred the OU multiple-optima model to the other models because it supported rather than falsified their hypothesis. Of course, this highly artificial computing model did not detect a tradeoff between detection and elimination of pathogens, this would be the authors’ own biological hypothesis and bias. The OU model intrinsically sets optima (biases) according to its built-in algorithm6, and this is one of the main objections to the use of this prediction model. The OU model artificially generates bias because its purpose is to find the trait optimum that stabilizes selection6. 
The misguided conclusion by Malmstrøm et al (2016)2 in using the OU model is that the trait optimum influences speciation.\n\n(b) Interestingly, Malmstrøm et al (2016)2 did not directly test the opposing hypothesis that the MHC I copy number evolution is not characterized by selection toward intermediate optima, and does not result from a tradeoff between detection and elimination of pathogens. Possibly, their best control in this regard was the simple Brownian model that did not work as well for them as the multiple-optima OU model that has extra parameters such as the addition of an overall optimum trait value to which all lineages are attracted. Other evolutionists often prefer the Brownian model for the reason that it is a simple, neutral model without the added bias of creating optimum trait value.\n\n( c ) The OU multiple-optima model is not a fool-proof algorithm, and a number of evolutionists believe that it can be an unreliable or misleading model. According to Cooper et al (2016)7, although widely used, the properties of the OU model, and other direct extensions of the Brownian model, are poorly understood leading to the potential for inappropriate use and misinterpretation of results. In particular, Cooper et al (2016)7 used computer simulation studies to demonstrate that the single peaked OU model error rates are unacceptably high when tree size is small (< 200 species tips), when likelihood ratio tests or Akaike information criterion (AICc) are used to select the best model, and when measurement errors are introduced into the data. They also showed that when the alpha parameter of trait evolution was extremely small (<1) in the OU model it was indistinguishable from Brownian motion, and as the alpha value became larger it favoured OU prediction models, until the larger values of alpha were indistinguishable from white noise and it was therefore independent of phylogeny. 
The alpha values for the Malmstrøm et al (2016)2 model selection analysis were markedly less than one (Supplementary Table 13), suggesting that they could have accepted the Brownian model over the OU model as the better model fit.\n\n6. The BiSSE threshold model. Malmstrøm et al (2016)2 carried out binary state speciation and extinction (BiSSE) analysis to estimate differences in diversification rate between lineages with high and low MHC I copy numbers. They found that diversification rates based on correlation estimates differed most when the threshold was placed between 20 and 25 copies (Fig. 3c). With a threshold in this range, the model with two separate speciation rates for lineages with high and low copy numbers was statistically better supported than a model with a single speciation rate parameter. On this basis, they concluded that, ‘These results suggest that the influence of MHC I genes on speciation rates is stronger in species that have already evolved at least 20 copies.’ In comparison, the number of MHC I gene copy numbers in humans (excluding haplotype differences) is approximately 18 genes; 6 classical and non-classical MHC I genes, 5 CD1 genes, and 5 PHFZ genes (MICA, MICB, MR1, HFE, Zn-A2-GP, etc). Thus, in comparison to some fish species, humans are diversifying along very nicely as a ‘diversified’ species approaching the ‘magical’ threshold of between 20 and 25 MHC I copies.\n\nIt is noteworthy that Maddison et al (2007)8 highlighted the following assumptions that need to be taken into account when using their BiSSE package. 
For the BiSSE model analysis none of the characters associated with speciation rates can be said to be causing or influencing evolution, even if Maddison et al (2007)8 write, ‘the correct conclusion given a significant result using our method is that the character examined or a codistributed character appear to be controlling diversification rates.’ At best, the binary character state is an association, at worse a misleading one. Maddison et al (2007)8 provided the following cautions and assumptions about the likelihood of the BiSSE (binary-state speciation and extinction) model:\n\n(a) the transitions happen instantaneously over the time scales considered (i.e., ignore periods of time during which a species is polymorphic). (b) these events are independent of one another; in particular, the character state change does not, in and of itself, cause speciation (or vice versa). (c) an accurate rooted phylogenetic tree with branch lengths is known (the \"inferred tree\") and the character state is known for each of the terminal taxa. (d) the tree is assumed complete: all extant species in the group have been found and included.\n\n(e) all terminal taxa are contemporaneous, and the tree is ultrametric (i.e., the total root-to-tip distance is the same for all tips).\n\nI’m not convinced that Malmstrøm et al (2016)2 considered or accepted these constraining assumptions when using BiSSE modeling.\n\n7. Speciation and diversification rates. (a) In the study by Santini et al (2009)9, the speciation rates within the Percomorpha clade were calculated to be at least ten times greater than in the Gadiformes order. Yet, according to Malmstrøm et al (2016)2, there were fewer than 20 copies of the U genes for each of the 5 species in the Perciformes clade, compared to more than 20 copies and up to 100 copies of the U genes in 16 of 30 species in the Gadiformes order (Fig 2b, Malmstrøm et al 20162). 
The use of only 5 species in the Perciformes order is a gross underestimate of the ten thousand or more species found in that order9. Moreover, in the Gadiformes order, there were closely related species (n=16) with > 20 U genes and different groups of closely related species (n=14) with < 20 U genes. Again, the number of species that were sequenced in the Gadiformes order is grossly under-represented. The Gadiformes comprises 10 families and more than 600 species9, whereas Malmstrøm et al (2016)2 sequenced only 27, i.e., 27 x 100/600 = 4.5% of the extant species, a percentage that is simply not good enough to support their extravagant conclusions. Is < 5% of the 600 species really representative of the Gadiformes? Malmstrøm et al (2016)2 have to be more temperate with their conclusions using such a small representative sample. There are clear inconsistencies with MHC I a3 fragment copy numbers in the Gadiformes order. The MHC I a3 fragment copy numbers are low (<20) for Moridae and M. occidentalis in Macrourinae, and for Phycinae, Lotinae, and three species in Gadinae. Five species of Gadinae have between 20 and 40 copies. On the other hand, Bregmacerotidae, Merlucciidae, Melanonidae, Muraenolepididae, and Trachyrincinae have MHC I a3 fragment copy numbers between 50 and 100. The threshold levels (20 to 25) are all over the place. Moreover, the lineage, genomic block duplication and hitchhiking (linkage) effects on MHC gene duplications (8 to >100 copies) in the Gadiformes have not been taken into account in the analysis of speciation rates (Fig 3, Malmstrøm et al 20162), and therefore make the entire analysis unreliable.\n\n(b) “Diversification rate analyses were calculated on the basis of the time-calibrated phylogeny and counts of species richness in each of the 37 mutually exclusive clades of teleost fishes"2; mostly from the Gadiformes order. 
The MHC I speciation model of Malmstrøm et al (2016)2 appears contradictory for the Perciformes (10,033 species), which have speciation rates 18 times greater than Gadiformes (555 species)9 and yet MHC I a3 fragment copy numbers at least two times lower than Gadiformes (Fig 2b, Malmstrøm et al 20162). Also, the Anabantiformes have 252 species – a speciation rate 40 times lower than Gadiformes9 – and yet their MHC I a3 fragment copy numbers are at least two times higher than in Perciformes.\n\n(c) Considering that there are more than 29,000 species of teleost fishes9, a highly limited analysis by Malmstrøm et al (2016)2 using a sample group of less than 0.2% of the available extant species cannot be considered statistically, taxonomically or biologically significant or sufficiently reliable to conclude that, “Evolution of the immune system influences speciation rates in teleost fish”2. What does a species half-life of 25 million years mean in the context of 29,000 species of teleost fishes? If the multiple-regime OU model is wrong, highly biased or misinterpreted, then does it validate or support the overall hypothesis of Malmstrøm et al (2016)2? Also, what does an optimal trait actually mean in the context of 29,000 species? If a suboptimal number of MHC I copies is detrimental to a species, then how have divergent species managed to survive for so long with a half-life of 25 million years of adaptation? Also, if, as Malmstrøm et al (2016)2 say, ‘Such gene family expansions may promote biological diversification by introducing new raw genetic material, potentially resulting in sub- or neofunctionalization and thus novel immunological pathways.’, then which of the non-optimal (greater than or less than the threshold of 20-25 copies) MHC I genes are detrimental to the species? 
In this regard, there must be a gradation of functionally good and bad MHC I genes as their copy numbers approach the threshold (optimally good) and then deteriorate beyond it. Is this assumption of an MHC I copy number functional trait value as a quantitative marker of speciation at all testable?\n\n8. In their discussion, Malmstrøm et al (2016)2 referenced the hypothesis of T cell depletion and hybrid fitness by Eizaguirre et al (2009)10 and concluded that, "Our analyses identify this threshold at 20–25 MHC I copies, suggesting that the effect of T cell depletion on hybrid fitness becomes more pronounced in this range and that this might affect mate choice in species with copy numbers above this threshold, promoting inbreeding and reinforcement." Eizaguirre et al (2009)10 suggested that, “Super-optimal individual MHC diversity should be a common disadvantage for species hybrids in vertebrates, resulting in elevated parasite loads.” In this regard, if high copy numbers of the MHC class I genes lead to hybridization and loss of the immune system as inferred by Eizaguirre et al (2009)10, then this more than likely would lead to extinction of populations and species. Extinction would be the most extreme and bizarre form of immune system influence on speciation rates. Furthermore, it is extremely speculative for Malmstrøm et al (2016)2 to say that high copy numbers of the MHC class I genes above the threshold of 20 to 25 copies promote inbreeding and reinforcement, because, in fact, there is no such evidence for it. A more reasonable hypothesis is that high copy numbers of linked MHC class I genes, such as in the rhesus macaque, or the mouse11, or the cod12, might benefit the species to better adapt to microbial inhabitants in a greater variety of geographical environments, although the evidence for this is tenuous as well. 
Despite ongoing debates, the selective advantage of MHC diversity in host-pathogen coevolution might not be easily resolved (at the macroevolution level) because of the constant number of insults by large numbers of pathogens in the life-time of an individual organism, population or a species and the arms race or Red Queen effect. Studies on extant species will always discover an example of a pathogen associated with a polymorphic MHC gene that might favour selective advantage for host-pathogen coevolution, whereas the pathogen that caused the extinction of a species is rarely or never found. To conclude that the immune system (that is, different copy numbers of the class I MHC genes2) influences speciation rates, it would have to be shown that the immunity gene products can commonly create reproductive barriers or genetic incompatibilities among populations that permit the maintenance of the genetic and phenotypic distinctiveness of these populations in geographical proximity13; and this was not done2.\n9. Malmstrøm et al (2016)2 did not provide any reliable evidence to support their speculation that evolution of the immune system influences speciation rates in teleost fish or that increasing MHC I diversity facilitates speciation7. Instead, Malmstrøm et al (2016)2 used their limited data and analyses using speculative models to jump to highly unsupported conclusions and quickly position the cart before the horse. Dijkstra & Grimholt1 pointed out that the Malmstrøm et al (2016)2 title “Evolution of the immune system influences speciation rates in teleost fish”, is unsubstantiated, and that their hypothesis seems to be “the wrong way around”. It should have been, “Speciation (rate) influences the evolution of the immune system in teleost fish.” Or, “Speciation rates are associated with diversity of MHC class I genes in teleost fishes”, which perhaps is too obvious and underwhelming. 
This is not simply the chicken or the egg causality dilemma; in fact, the change in title is better supported by the literature and the established theories of MHC genomic evolution in vertebrates4. However, because it is less “sexy” and controversial than the original title, it might not have been so readily published.\n\n10. A large number and variety of genome-wide gene duplications have been associated with speciation13, that is, genomic gene duplications are not limited to only class I MHC genes. If MHC I gene duplications effect or affect speciation, how do the other hundreds of gene duplications contribute to speciation rates? Also, do sequence variants or mutations in non-duplicated genes have any influence on speciation rates? It seems absurd to pick on only one group of gene duplications (e.g., MHC class I genes2) as those that are responsible for speciation and ignore all the others as an inconvenience. For example, a relatively recent comparative genomic study revealed how genomes change with speciation in an examination of genomes from five cichlid fish species, an ancestral lineage from the Nile, and four species from the East Africa lakes, Tanganyika, Malawi, and Victoria14. Compared to the ancestral Nile lineage, the East African cichlid genomes had many alterations in regulatory elements, accelerated evolution of protein-coding elements in genes for pigmentation, an excess of gene duplications, and other distinct features that affect gene expression associated with transposable element insertions and novel microRNA. Each species also contains a reservoir of mutations different from the other species14. Much of the diversity between the cichlid fish species evolved in a nonparallel manner often rapidly due to sexual selection and genetic conflicts between males and females or between different regions of the genome at a regulatory level14 rather than by the slower and weaker forces of classical natural selection13. 
If sexual selection and genetic conflict at the genomic regulatory level are the prime movers of speciation rate, it is difficult to conclude that the variable diversity of a few MHC gene copies are responsible for speciation as well as for the many other associated genomic changes associated with speciation.\n\n11. Malmstrøm et al (2016)2 informed us in the introduction section of their publication that \"Our results highlight the plasticity of the vertebrate adaptive immune system and support the role of MHC genes as ‘speciation genes’, promoting rapid diversification in teleost fishes.\" MHC class I gene copy number variability occurs across many different species, families, orders and domains. Because there is such enormous variability in MHC class I gene copy number for hundreds4 or possibly even thousands of different chordate species, it is not possible to conclude meaningfully that the expansion of MHC class I genes provides an undefined advantage of one species over another. For example, the great apes (humans, chimpanzees, gorillas and orangutans) have about six functional MHC class I genes, whereas the old and new world monkeys often have up to 15 or more4,11. Is this evidence that the MHC class I genes influence the rate of speciation in primates? And if so, what does that really mean in the whole scheme of things? How do the species with low copy numbers of MHC class I genes survive so well over millions of years without the presence of another 90 to 100 copies of MHC class I genes? This question is often neglected, and yet it is important for a better understanding of the function and evolution of MHC genes between and within the vertebrate species.\n\n12. Taxonomic and lineage markers. Mutations, indels and duplications drive diversity and evolution. However, most mutated genes within species and their families do not create or influence speciation rates in the sense that Malmstrøm et al (2016)2 use the term, ‘speciation genes’. 
In comparative genomics and the sequence relationships between different species, most genomic sequences range between newly derived genes and the ultraconserved or the essential core coding and noncoding genes with varying amounts of sequence differences. Some genic and nongenic sequences such as the MHC genes and retrotransposons are highly polymorphic and therefore are useful taxonomic markers at the individual, population, species and broader lineage levels. The MHC gene sequences clearly are one of these useful taxonomic or lineage markers along with olfactory receptors, immunoglobulins, globins, HOX, TOLLs, KIRs, mitochondrial DNA, ribosomal RNA sequences and thousands of others that can be used comparatively in phylogenies to examine the accuracy and reliability of current taxonomical rankings and sister lineages. However, because many thousands of coding and noncoding genes (or sequences) are variants (polymorphic) or vary in copy numbers, we cannot immediately or easily infer that all or some of them are responsible for speciation without providing further concrete evidence. This kind of extrapolation without the burden of proof is absurd and wrong. Similarly, to say that the polymorphisms demonstrate natural selection as if natural selection were a biological or molecular mechanism is meaningless without showing experimentally how these polymorphisms benefit or disadvantage the organism over all the other different polymorphisms.\n\n13. On the basis of either a priori or a posteriori reasoning, the immune system obviously affects the wellbeing of individuals and populations, but whether it can be extrapolated to speciation events and speciation rates remains highly dubious and most probably unlikely.
It seems too farfetched to blame MHC class I genes with copy numbers above threshold levels for promoting inbreeding and reinforcement2 because this in turn could create hybrid inviability or sterility resulting in postzygotic isolation. Although the population conditions in many models of rapid speciation do favour inbreeding and/or hybridization13, none of the teleost species tested by Malmstrøm et al. (2016)2 were shown to be either inbreeding or in postzygotic isolation. The factors responsible for either prezygotic or postzygotic isolation are likely to be independent of the adaptive immune system, although zealots might argue otherwise. Hybridization between diverging lineages in postzygotic reproductive isolation can trigger genome instability. For most animals without an adaptive immune system and for plants without an MHC, speciation depends on the shrinkage, expansion and equilibrium (e.g., aneuploidization and dysploidy) of the genome and the containment and functionality of all the essential genomic information to develop an optimal balance between stability and plasticity within the organism in order to first survive and then propagate and expand itself as a new species13. In those rare and ‘traumatic’ transitional situations, there is no need for particular ‘speciation’ genes such as variable copies of the class I MHC genes to influence speciation. The rarely observed transition from population ‘trauma’ to a new speciation event depends on an array of totally different factors for creating postzygotic isolation events, including interbreeding between semi-isolated populations and stress-induced changes in chromosomal and ploidy integrity in both hostile and non-hostile environments.\n\n14.
Finally, Malmstrøm et al. (2016)2 admirably sequenced 66 teleost species by a next-generation sequencing method and identified an array of MHC I and MHC II exonic fragments for phylogenetic and speciation analysis using the multiple-regime OU model to predict the optimal MHC I copy number as an evolutionary trait optimum affecting speciation. However, the conclusions of the paper by Malmstrøm et al. (2016)2, especially for the MHC I gene copy numbers, are unreliable because they are based on far too many assumptions, speculations, contradictions, incomplete or missing data and unproven predictive models with little or no empirical evidence in support. Nevertheless, their simple but controversial hypothesis is published, and now it is up to them and others to test its validity and \"consider plausible alternative hypotheses in a firm hypothesis-testing framework in which alternative hypotheses make clear [and sensible] predictions of emerging patterns that can be unambiguously associated with particular models.\"7\n\nIs the rationale for commenting on the previous publication clearly described? Yes\n\nAre any opinions stated well-argued, clear and cogent? Yes\n\nAre arguments sufficiently supported by evidence from the published literature or by new data and results? Yes\n\nIs the conclusion balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "3948",
"date": "06 Sep 2018",
"name": "Johannes M. Dijkstra",
"role": "Author Response",
"response": "Dear Dr. Jerzy Kulski, Thank you for your review and support of our article. We appreciate that an expert such as you is willing to join the public debate so that erroneous/unsubstantiated messages like the ones presented by Malmstrøm et al. cannot take hold in our field. Your comments are very extensive and valuable, and we now refer the readers to them. Our special fields of scientific expertise are MHC genes and molecules, and in our first manuscript version we only concentrated on those. If there is not sufficient unity among the units used for mathematical modeling, that modeling, or the functional explanation of the resulting model, can never make sense. However, we have now realized that for a large audience these MHC-specific issues might not be so clear, and that it is better to also address the questionable modeling methods used by Malmstrøm et al. Therefore, we have now added two paragraphs dedicated to this questionable modeling, titled “Detailed discussion of the use of the Ornstein-Uhlenbeck model by Malmstrøm et al.1” and “Additional criticisms in regard to the modelling by Malmstrøm et al.1”. We realize that we use different language in regard to these topics than a theoretical biologist would use, but we hope that nevertheless we address the issues clearly and correctly. Sincerely, also on behalf of Dr. Unni Grimholt, Hans (J.M.) Dijkstra"
}
]
}
] | 1
|
https://f1000research.com/articles/7-963
|
https://f1000research.com/articles/7-1418/v1
|
06 Sep 18
|
{
"type": "Software Tool Article",
"title": "ITSxpress: Software to rapidly trim internally transcribed spacer sequences with quality scores for marker gene analysis",
"authors": [
"Adam R. Rivers",
"Kyle C. Weber",
"Terrence G. Gardner",
"Shuang Liu",
"Shalamar D. Armstrong"
],
"abstract": "The internally transcribed spacer (ITS) region between the small subunit ribosomal RNA gene and large subunit ribosomal RNA gene is a widely used phylogenetic marker for fungi and other taxa. The eukaryotic ITS contains the conserved 5.8S rRNA and is divided into the ITS1 and ITS2 hypervariable regions. These regions are variable in length and are amplified using primers complementary to the conserved regions of their flanking genes. Previous work has shown that removing the conserved regions results in more accurate taxonomic classification. An existing software program, ITSx, is capable of trimming FASTA sequences by matching hidden Markov model profiles to the ends of the conserved genes using the software suite HMMER. ITSxpress was developed to extend this technique from marker gene studies using Operational Taxonomic Units (OTU’s) to studies using exact sequence variants; a method used by the software packages Dada2, Deblur, QIIME 2, and Unoise. The sequence variant approach uses the quality scores of each read to identify sequences that are statistically likely to represent real sequences. ITSxpress enables this by processing FASTQ rather than FASTA files. The software also speeds up the trimming of reads by a factor of 14-23 times on a 4-core computer by temporarily clustering highly similar sequences that are common in amplicon data and utilizing optimized parameters for Hmmsearch. ITSxpress is available as a QIIME 2 plugin and a stand-alone application installable from the Python package index, Bioconda, and Github.",
"keywords": [
"Amplicon sequencing",
"marker gene sequencing",
"internally transcribed spacer",
"ITS",
"trimming",
"QIIME"
],
"content": "Introduction\n\nThe internally transcribed spacer (ITS) between the small subunit (SSU/18S) ribosomal RNA gene and the large subunit (LSU/28S) ribosomal RNA gene is a commonly used phylogenetic marker. The Fungal Barcoding Consortium standardized the practice of ITS sequencing by adopting the region for its efforts (Schoch et al., 2012), and the major fungal database UNITE uses the region as well (Kõljalg et al., 2013). It is a common practice to amplify the ITS1 or ITS2 region using primers located in the more conserved 18S/5.8S genes or the 5.8S/28S genes. Previous work has shown that leaving these more conserved regions on the ITS sequence creates miss-assignments. In one study of full length ITS sequences, 11% of the time the ITS1 and ITS2 regions matched one reference sequence but the full sequence including ITS1, ITS2 and the 5.8S did not (Nilsson et al., 2009). The software package ITSx was developed and subsequently improved (Bengtsson-Palme et al., 2013; Nilsson et al., 2010) to accurately trim ITS sequences from longer reads. ITSx uses hidden Markov models (HMMs) created for fungi and 17 other groups of eukaryotes to identify the start and stop sites for the ITS region. The software used the HMMER package Hmmscan until version 1.1b when Hmmsearch was substituted for increased speed (Eddy, 2011).\n\nITSxpress was created to extend the capabilities of ITSx from marker gene studies using operational taxonomic units (OTUs) to studies using exact sequence variants. Amplicon sequencing creates sequences with errors. In order to distinguish true sequences from sequencing errors, sequences have been clustered into OTU’s by sorting reads by abundance then clustering them in a greedy fashion at a specified percent identity (often 97%). Recently, new methods (e.g. 
Dada2, Deblur and Unoise) have been published that use statistical models or information theoretic models to identify exact sequence variants that represent true biological sequences (Amir et al., 2017; Callahan et al., 2016; Caporaso et al., 2010; Edgar, 2016). These methods require the error profiles of individual sequences, which requires trimming each FASTQ sequence (Cock et al., 2010) to the ITS region of interest. ITSxpress trims FASTQ files for this purpose.\n\n\nMethods\n\nITSxpress rapidly merges and trims paired-end FASTQ sequences to the ITS region of interest for the identification of exact sequence variants. The software merges and error-corrects reads using BBMerge (Bushnell et al., 2017). The merged FASTQ reads are then sorted by abundance and clustered by default at 99.5% identity to generate a representative set of sequences using VSEARCH (Rognes et al., 2016). The user may also select dereplication from 98% to 100% identity. These unique sequences are compared to the HMMs used by ITSx version 1.1b (Bengtsson-Palme et al., 2013) using Hmmsearch (Eddy, 2011). Read filtering heuristics in Hmmsearch are enabled and reports are filtered as well. The start and stop position of each cluster representative is then used to trim each sequence in the cluster and all original FASTQ sequences that could be merged are returned with the ends trimmed. All major steps (merging, dereplication and Hmmsearch) are multithreaded. The source code is version controlled and tested by continuous integration.\n\nITSxpress is an open source Python package that can be run on Linux or MacOS systems and does not require any special memory or processor configuration. It is available from Github, Pip, Bioconda and as a plugin for QIIME 2. The QIIME 2 package operates on native QIIME 2 .qza files. A typical workflow for an ITS sequencing project would take a set of paired-end FASTQ forward and reverse sequences and return a FASTQ file with merged, trimmed sequences and a log file. 
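The coordinate-propagation step described in Methods (each cluster representative's Hmmsearch-derived start/stop positions applied to every read in its cluster) can be sketched in a few lines. This is a minimal pure-Python illustration only; the `propagate_trim` helper and its data structures are hypothetical and are not the ITSxpress API, which performs clustering with VSEARCH and coordinate detection with Hmmsearch:

```python
def propagate_trim(reads, rep_trim):
    """Apply each cluster representative's (start, stop) trim coordinates
    to every read assigned to that cluster.

    reads    -- dict: read id -> (merged sequence, cluster id)
    rep_trim -- dict: cluster id -> (start, stop) found on the representative
    Returns a dict: read id -> trimmed sequence.
    """
    trimmed = {}
    for read_id, (seq, cluster) in reads.items():
        start, stop = rep_trim[cluster]
        trimmed[read_id] = seq[start:stop]
    return trimmed

# Toy example: two near-identical reads share cluster "c1".
reads = {
    "r1": ("AAAACGTACGTTTT", "c1"),
    "r2": ("AAAACGTACGATTT", "c1"),
    "r3": ("GGGGTTTTAAAACC", "c2"),
}
rep_trim = {"c1": (4, 10), "c2": (4, 8)}
print(propagate_trim(reads, rep_trim))
```

Because only one representative per cluster is searched against the HMMs, the number of Hmmsearch calls drops roughly by the clustering factor reported in the Results.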
Uncompressed FASTQ or Gzip-compressed FASTQ files can be used. The command-line version of ITSxpress accepts interleaved, paired-end files, forward and reverse paired-end files, and single-ended files. The QIIME 2 plugin version of ITSxpress accepts a .qza QIIME 2 artifact file of the type “PairedEndSequencesWithQuality” or “SequencesWithQuality” that contains one or more samples with single or paired data. It merges (if paired) and trims all samples and returns a QIIME 2 artifact file containing single-ended sequences with quality or paired-end sequences with quality that can be used for sequence variant calling by DADA2 or Deblur (Amir et al., 2017; Callahan et al., 2016).\n\nTo compare the speed and trimming results of ITSxpress and ITSx, we used the ITS1 and ITS2 sequences from 15 soil samples collected from the rhizosphere of maize in fields with different winter cover crops. ITS1 reads were amplified using the ITS1F/ITS2 primer set (Gardes & Bruns, 1993; White et al., 1990). ITS2 reads were amplified using the ITS3/ITS4 primer set (White et al., 1990). Reads were multiplexed and sequenced on an Illumina MiSeq in 2×300 bp run mode using version 3.0 chemistry.\n\nTests of ITSxpress and ITSx performance were run on single compute nodes with 2 × 10-core Intel Xeon processors (E5-2670 v2, 2.50 GHz, 25 MB cache) with hyper-threading enabled, 128 GB DDR3 ECC memory and two Intel DC S3500 Series SATA 6.0 Gb/s SSDs. For the first test of trimming speed, 5 replicates were run in which 15 ITS1 and 15 ITS2 samples were trimmed using ITSxpress and ITSx with 4 logical cores. Trimming was done using ITSxpress with default settings. ITSx was run with multithreading and heuristic filtering turned on and only the fungal database selected. The running times for ITSx and ITSxpress were plotted on a log scale (Figure 1).
The number of total reads in each sample and reads remaining after clustering at 99.5% identity are shown on a log scale (Figure 2).\n\nN=5 for each of the 30 samples.\n\nTo compare the performance of ITSxpress and ITSx as computer cores were added, tests were run on the ITS1 and ITS2 samples with the largest numbers of sequences (ITS1: n=100,543, 16% unique; ITS2: n=145,499, 30% unique). Each sample was processed 5 times with 1, 4, 8, 16, 30, and 40 virtual compute cores. The mean and standard error were plotted (Figure 3). Program settings were the same as in the first test.\n\nThe largest samples from the ITS1 (n=100,543) and ITS2 (n=145,499) datasets were selected for analysis. N=5 for each core/sample combination.\n\nThe trimming positions from ITSx and ITSxpress were compared for every ITS1 and ITS2 sequence. If a read was not trimmed identically by ITSx and ITSxpress, it was globally aligned and the start and stop positions were compared. Alignment was done using the Biopython Pairwise2 implementation of a global alignment function with the parameters (match score: 2, mismatch penalty: -1, gap opening penalty: -0.5, gap extension penalty: -0.1) (Cock et al., 2009).\n\n\nResults\n\nWhen using 4 cores, ITSxpress trimmed ITS1 region samples a median of 23 times faster (Bayesian 95% highest density interval (HDI): 7–32) than ITSx; the HDI was calculated with the R package HDInterval (Meredith & Kruschke, 2018). ITSxpress trimmed the ITS2 region 14 times faster (95% HDI: 8–24) than ITSx (Figure 1). Clustering at 99.5% identity reduced the number of reads used for Hmmsearch by a median of 71 times (95% HDI: 17–95) for ITS1 and 36 times (95% HDI: 21–52) for ITS2 (Figure 2).\n\nGlobal alignment was used to compare the trimming results of ITSx and ITSxpress for reads that were not identical.
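The per-read comparison rests on an affine-gap global alignment with the parameters quoted in Methods. The study used Biopython's Pairwise2; the `global_align_score` function below is a hypothetical, score-only pure-Python stand-in (Gotoh recurrences) shown for illustration under the same scoring, where the first residue of a gap costs the opening penalty and each further residue the extension penalty:

```python
import math

def global_align_score(a, b, match=2.0, mismatch=-1.0,
                       gap_open=-0.5, gap_extend=-0.1):
    """Affine-gap global alignment score (Gotoh algorithm).
    First gap residue costs gap_open; each additional one costs gap_extend."""
    n, m = len(a), len(b)
    neg = -math.inf
    # M: a[i] aligned to b[j]; X: gap in b; Y: gap in a
    M = [[neg] * (m + 1) for _ in range(n + 1)]
    X = [[neg] * (m + 1) for _ in range(n + 1)]
    Y = [[neg] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0.0
    for i in range(1, n + 1):
        X[i][0] = gap_open + (i - 1) * gap_extend
    for j in range(1, m + 1):
        Y[0][j] = gap_open + (j - 1) * gap_extend
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            M[i][j] = max(M[i - 1][j - 1], X[i - 1][j - 1], Y[i - 1][j - 1]) + s
            X[i][j] = max(M[i - 1][j] + gap_open, X[i - 1][j] + gap_extend)
            Y[i][j] = max(M[i][j - 1] + gap_open, Y[i][j - 1] + gap_extend)
    return max(M[n][m], X[n][m], Y[n][m])

# A read one base longer than its comparator incurs one gap opening.
print(global_align_score("ACGT", "ACGTT"))
```

With these penalties a short insertion or deletion is cheap relative to a mismatch, which is why indels near the HMM boundary can shift the inferred start/stop positions between the two tools.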
When reads were clustered at 99.5% identity, the default behavior, ITSxpress and ITSx trimmed 99.822% (n=773,021) of reads in the ITS1 region within 2 bases of each other and 99.099% (n=782,385) of reads in the ITS2 region within 2 bases of each other. When reads were dereplicated at 100% identity, ITSxpress and ITSx trimmed 99.992% (n=773,019) of reads in the ITS1 region within 2 bases of each other and 99.864% (n=782,582) of reads in the ITS2 region within 2 bases of each other.\n\n\nDiscussion\n\nITSxpress increases the trimming speed of ITS sequences by clustering reads and optimizing the parameters for Hmmsearch. Most of the decrease in running time is attributable to clustering. Clustering at 99.5% identity reduced the number of sequences by a median of 71 and 36 times for the ITS1 and ITS2 regions, respectively. The time complexity of Hmmsearch on a single core is approximately linear with the number of sequences, so decreases in the number of sequences significantly decrease running time. The time required for clustering varies: for dereplication at 100% identity, VSEARCH uses the rapid CityHash64 function (Rognes et al., 2016); for clustering at less than 100% identity, reads are sorted by abundance and then clustered using a greedy search. These steps take time but are faster than Hmmsearch and scale sub-linearly, resulting in median speed increases of 6–9x for the sequences dereplicated at 100% identity and 14–23x for the sequences clustered at 99.5% identity.\n\nBoth ITSx and ITSxpress use Hmmsearch, the same HMM models, and run using multiple cores. ITSxpress uses empirically tuned Hmmsearch heuristic values of 1×10⁻⁶ for F1, F2 and F3, which show increased speed and little loss of sensitivity. ITSx uses Hmmsearch’s default values of 1×10⁻² for F1, 1×10⁻³ for F2 and 1×10⁻⁵ for F3 when the “--heuristics” flag is set.\n\nITSx and ITSxpress scale differently as cores are added.
ITSxpress spends about half its time clustering when the clustering identity is below 100%, and for a typical ITS sample this reduces the number of sequences to be analyzed by Hmmsearch to the point where parallelizing Hmmsearch does not result in large speed gains. This trait is beneficial for users on laptop or desktop computers because they can trim a typical ITS sample in less than a minute using 1–4 cores. Both programs use Hmmsearch for the most computationally intensive part of their workflows. ITSx benefits from Hmmsearch parallelization up to about 10 cores, but then the increases decline; the nonlinear scaling of Hmmsearch is noted in the HMMER User Guide (Eddy, 2011).\n\nITSx and ITSxpress trim most sequences exactly the same. At 100% identity, one in 12,500 ITS1 sequences and one in 735 ITS2 sequences differ by more than two bases. This may be caused by differences in the heuristic settings for Hmmsearch. With clustering at 99.5% identity, the differences are greater, with one in 560 ITS1 sequences and one in 110 ITS2 sequences differing by more than two bases. At 99.5% identity, sequences of 600–800 bp can differ by 3 bp and still be clustered together. Substitutions do not affect the trimming position, but insertions or deletions do, accounting for some of the difference. The clustering identity can be set as low as 98% to accommodate special uses, but lowering the identity below 99.5% is not generally recommended since ITSxpress is quite fast even at 100% identity.\n\nITSxpress quickly merges reads and trims the selected ITS region from a range of amplicon samples.
It trims FASTQ files, enabling the use of newer exact sequence variant methods, and is available as a command-line application and as a plugin for QIIME 2.\n\n\nSoftware and data availability\n\nThe source code for the stand-alone version of ITSxpress version 1.6.1 used for this manuscript is available from: https://doi.org/10.5281/zenodo.1317575 (Rivers, 2018a). This software is available under the terms of the Creative Commons Zero \"No rights reserved\" data waiver (CC0 1.0 Public domain dedication).\n\nUpdated versions of the ITSxpress software are available from:\n\n- GitHub: https://github.com/USDA-ARS-GBRU/itsxpress\n\n- The Python Package Index: https://pypi.org/project/itsxpress/\n\n- Bioconda: https://bioconda.github.io/recipes/itsxpress/README.html\n\nThe QIIME 2 plugin for ITSxpress is available from: http://doi.org/10.5281/zenodo.1317579 (Weber & Rivers, 2018); GitHub (https://github.com/USDA-ARS-GBRU/q2_itsxpress); and the Python Package Index (https://pypi.org/project/q2_itsxpress/). This software is available under the terms of the CC0 1.0 Public domain dedication.\n\nThe computer code used to benchmark the software and generate the figures in this paper is available at: http://doi.org/10.5281/zenodo.1317585 (Rivers, 2018b); and GitHub (https://github.com/USDA-ARS-GBRU/itsxpress-paper). The code is also available under the terms of the CC0 1.0 Public domain dedication.\n\nData used in this study are deposited in the NCBI Sequence Read Archive under the accessions listed in NCBI BioProject Accession PRJNA483055.",
"appendix": "Grant information\n\nThis research was funded by the United States Department of Agriculture (USDA), Agricultural Research Service (ARS) research project number 6066-21310-005-00-D and computational analysis using SCINet under project 0500-00093-001-00-D. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the USDA.\n\nThe opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not reflect specific views of the USDA.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAmir A, McDonald D, Navas-Molina JA, et al.: Deblur rapidly resolves single-nucleotide community sequence patterns. mSystems. 2017; 2(2): pii: e00191-16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBengtsson-Palme J, Ryberg M, Hartmann M, et al.: Improved software detection and extraction of ITS1 and ITS2 from ribosomal ITS sequences of fungi and other eukaryotes for analysis of environmental sequencing data. Methods Ecol Evol. 2013; 4(10): 914–919. Publisher Full Text\n\nBushnell B, Rood J, Singer E: BBMerge - Accurate paired shotgun read merging via overlap. PLoS One. 2017; 12(10): e0185056. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCallahan BJ, McMurdie PJ, Rosen MJ, et al.: DADA2: High-resolution sample inference from Illumina amplicon data. Nat Methods. 2016; 13(7): 581–583. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010: 7(5): 335–336. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCock PJ, Antao T, Chang JT, et al.: Biopython: freely available Python tools for computational molecular biology and bioinformatics. 
Bioinformatics. 2009; 25(11): 1422–1423. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCock PJ, Fields CJ, Goto N, et al.: The Sanger FASTQ file format for sequences with quality scores, and the Solexa/Illumina FASTQ variants. Nucleic Acids Res. 2010; 38(6): 1767–1771. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEddy SR: Accelerated profile HMM searches. PLoS Comput Biol. 2011; 7(10): e1002195. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgar RC: UNOISE2: improved error-correction for Illumina 16S and ITS amplicon sequencing. bioRxiv. 2016; 081257. Publisher Full Text\n\nGardes M, Bruns TD: ITS primers with enhanced specificity for basidiomycetes--application to the identification of mycorrhizae and rusts. Mol Ecol. 1993; 2(2): 113–118. PubMed Abstract | Publisher Full Text\n\nKõljalg U, Nilsson RH, Abarenkov K, et al.: Towards a unified paradigm for sequence-based identification of fungi. Mol Ecol. 2013; 22(21): 5271–7. PubMed Abstract | Publisher Full Text\n\nMeredith M, Kruschke J: HDInterval: Highest (posterior) density intervals. 2018. Reference Source\n\nNilsson RH, Ryberg M, Abarenkov K, et al.: The ITS region as a target for characterization of fungal communities using emerging sequencing technologies. FEMS Microbiol Lett. 2009; 296(1): 97–101. PubMed Abstract | Publisher Full Text\n\nNilsson RH, Veldre V, Hartmann M, et al.: An open source software package for automated extraction of ITS1 and ITS2 from fungal ITS sequences for use in high-throughput community assays and molecular ecology. Fungal Ecol. 2010; 3(4): 284–287. Publisher Full Text\n\nRivers AR: ITSxpress. [software repository]. 2018a. http://www.doi.org/10.5281/zenodo.1317575\n\nRivers AR: ITSxpress: Software to rapidly trim internally transcribed spacer sequences with quality scores for marker gene analysis [data set]. 2018b. 
http://www.doi.org/10.5281/zenodo.1317585\n\nRognes T, Flouri T, Nichols B, et al.: VSEARCH: a versatile open source tool for metagenomics. PeerJ. 2016; 4: e2584. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchoch CL, Seifert KA, Huhndorf S, et al.: Nuclear ribosomal internal transcribed spacer (ITS) region as a universal DNA barcode marker for Fungi. Proc Natl Acad Sci U S A. 2012; 109(16): 6241–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeber KC, Rivers AR: Q2-ITSxpress [software repository]. 2018. http://www.doi.org/10.5281/zenodo.1317579\n\nWhite TJ, Bruns T, Lee S, et al.: Amplification and direct sequencing of fungal ribosomal RNA genes for phylogenetics. In PCR Protocols: A Guide to Methods and Applications. San Diego. 1990; 315–322. Publisher Full Text"
}
|
[
{
"id": "38059",
"date": "19 Sep 2018",
"name": "J. Gregory Caporaso",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations: A number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present ITSxpress, an approach for trimming flanking rRNA regions from ITS sequence data. This is effectively a replacement for ITSx, which performed this process on fasta files after sequence clustering. By working with fastq data, ITSxpress allows this trimming to be applied to fastq files, which in turn allows for the application of modern amplicon analysis workflows.\n\nMajor points:\n\n“The user may also select dereplication from 98% to 100% identity.”\nThe above sentence should probably be replaced with the following, since dereplication usually refers to clustering at 100% identity: “The user may also choose to cluster at between 98% and 100% identity.”\n\n“The start and stop position of each cluster representative is then used to trim each sequence in the cluster”\nHow does this work if there were insertions and/or deletions between the cluster representative sequence and each cluster member? Wouldn't the position numbers be incorrect in that case?\n\nHow does a lower percent identity threshold for clustering impact accuracy (as compared to ITSx) and runtime? I'm wondering if it's worth it, for example, to run the clustering step at 98%, or maybe even lower, for quicker run time.
Exploring accuracy and runtime as functions of percent identity seems like a missing piece of this study since in some cases the runtimes can be fairly long (e.g., 80 minutes) for a pre-processing step.\n\nMinor points: miss-assignments should be mis-assignments\nOTU’s is used in several places where OTUs should be used\nIt merges (if paired) and trims all samples\nShould that say \"trims all sequences\"?\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "38810",
"date": "24 Oct 2018",
"name": "Johanna B. Holm",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis report introduces an updated version of ITSx, making the tool applicable to exact sequence variant analyses as opposed to operational taxonomic unit analyses. The tool is meant to be implemented upstream of sequence variant calling algorithms, producing dereplicated and merged reads. These reads can then be used in the deblur or dada2 algorithms for calling the exact sequence variants.\n\nMinor Issues:\n\nMethods/Operation: Because there is clear instruction for use of the ITSxpress product in qiime2, it would be helpful to add a sentence regarding the use of the product in the dada2 workflow, as most users are accustomed to running the forward and reverse reads through the dada2 algorithm separately, and merging afterwards. Would the correct methodology be to run dada2(ITSxpress file) followed by sequence table production (skipping the merge step)?\n\nMethods/Implementation: replace \"sorted by abundance and clustered\" with \"sorted by abundance and dereplicated\", for clarity.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool?
Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1418
|
https://f1000research.com/articles/5-386/v1
|
22 Mar 16
|
{
"type": "Data Note",
"title": "The ICR142 NGS validation series: a resource for orthogonal assessment of NGS analysis",
"authors": [
"Elise Ruark",
"Anthony Renwick",
"Matthew Clarke",
"Katie Snape",
"Emma Ramsay",
"Anna Elliott",
"Sandra Hanks",
"Ann Strydom",
"Sheila Seal",
"Nazneen Rahman"
],
"abstract": "To provide a useful community resource for orthogonal assessment of NGS analysis software, we present the ICR142 NGS validation series. The dataset includes high-quality exome sequence data from 142 samples together with Sanger sequence data at 730 sites; 409 sites with variants and 321 sites at which variants were called by an NGS analysis tool, but no variant is present in the corresponding Sanger sequence. The dataset includes 286 indel variants and 275 negative indel sites, and thus the ICR142 validation dataset is of particular utility in evaluating indel calling performance. The FASTQ files and Sanger sequence results can be accessed in the European Genome-phenome Archive under the accession number EGAS00001001332.",
"keywords": [
"Variant calling",
"next-generation sequencing",
"NGS",
"exome",
"indel",
"validation"
],
"content": "Introduction\n\nNext-generation sequencing (NGS) approaches have greatly enhanced our ability to detect genetic variation. Over the past decade NGS hardware, software, throughput, data quality and analytical tools have evolved dramatically. Thorough evaluation of each new laboratory and analytical development is challenging but necessary to fully understand how pipeline modification can impact results. To fully assess performance, NGS analysis tools should ideally be run on samples with pre-determined positive and negative sites assessed through orthogonal experimentation such as Sanger sequencing.\n\nOver the past five years, we have generated extensive data on thousands of samples using different NGS instruments, sequencing chemistry, gene panels, exome captures and variant calling tools. Fortuitously, during this process we have generated orthogonal validation data using Sanger sequencing for a core set of 142 samples that were included in the majority of our experiments. We now formally use these samples, which we call the ICR142 NGS validation series, to evaluate NGS variant calling performance after any change to experimental or analytical protocols. This series has proved an extremely useful resource for our assessment of NGS analysis in both the research and clinical settings. We believe that it may also have utility for others, and hence are making it available here.\n\n\nMaterials and methods\n\nWe used lymphocyte DNA from 142 unrelated individuals. All individuals were recruited to the BOCS study and have given informed consent for their DNA to be used for genetic research. The study is approved by the London Multicentre Research Ethics Committee (MREC/01/2/18)\n\nOver the last five years we have generated data from the ICR142 validation series using different exome captures which we have analysed with multiple aligner/caller combinations1–6. To date we have generated Sanger sequence data for 730 sites amongst the 142 individuals. 
These sites include variants called by only one aligner and caller combination, increasing the representation of sites which can discriminate performance between methods.\n\nTo generate the Sanger sequence data, we performed PCR reactions using the Qiagen Multiplex PCR kit, and bidirectional sequencing of resulting amplicons using the BigDye terminator cycle sequencing kit and an ABI3730 automated sequencer (ABI PerkinElmer). All sequencing traces were analysed with both automated software (Mutation Surveyor version 3.10, SoftGenetics) and visual inspection.\n\nWe considered a site negative for a base substitution if the specific base substitution was not present, resulting in 46 negative base substitution sites. We considered a site negative for an indel if no indel, of any kind, was detected in the sequencing trace, resulting in 275 negative indel sites. We annotated confirmed variants with the HGVS-compliant CSN standard using CAVA (version 1.1.0) according to the transcripts designated in Supplementary table 17. There were 123 confirmed base substitution variants and 286 confirmed indel variants (Figure 1, Supplementary table 1).\n\nWe have also generated high-quality exome sequencing data for the ICR142 NGS validation series. We prepared DNA libraries from 1.5 µg genomic DNA using the Illumina TruSeq sample preparation kit. DNA was fragmented using Covaris technology and the libraries were prepared without gel size selection. We performed target enrichment in pools of six libraries (500 ng each) using the Illumina TruSeq Exome Enrichment kit. The captured DNA libraries were PCR amplified using the supplied paired-end PCR primers. Sequencing was performed with an Illumina HiSeq2000 (SBS Kit v3, one pool per lane) generating 2×101 bp reads. 
CASAVA v1.8.1 (Illumina) was used to demultiplex and create FASTQ files per sample from the raw base call files.\n\nAll of the 730 sites had at least 15× coverage in the exome data, defined as at least 15 reads of good mapping quality (mapping score ≥20). Because these sites are well covered, we can readily assess the variant calling performance of any software tool by applying the pipeline to the exome sequencing data and comparing the variant calls with the Sanger sequencing dataset.\n\n\nData availability\n\nWe have deposited the FASTQ files for all 142 individuals in the European Genome-phenome Archive (EGA). The accession number is EGAS00001001332. Details of how to request access to the data are available at: www.icr.ac.uk/icr142.\n\nResearchers and authors who use the ICR142 NGS validation series should reference this paper and should include the following acknowledgement: \"This study makes use of the ICR142 NGS validation series data generated by Professor Nazneen Rahman’s team at The Institute of Cancer Research, London”.",
"appendix": "Author contributions\n\n\n\nN.R. and E.Ru. designed the experiment. A.R., E.Ra. and SH generated the exome data. E.Ru. and A.E. undertook data management, S.S., A.R., and K.S. undertook sample management and Sanger validations. M.C. and A.S. undertook the data and administrative management required for data to be accessible. E.Ru. and N.R. wrote the manuscript. All authors contributed to the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nWe acknowledge NHS funding to the NIHR Biomedical Research Centre at The Royal Marsden and the ICR. This study was funded by the Institute of Cancer Research, London.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to the Scientific Computing Team at the Institute of Cancer Research for provision of HPC services. We are grateful to Peter Humburg, Andy Rimmer, Manuel Rivas and Peter Donnelly for undertaking some of the aligner/caller comparisons.\n\n\nSupplementary material\n\nSupplementary table 1. Sanger sequencing results for 730 sites in the ICR142 NGS validation series. 
Confirmed variants are annotated according to the designated transcript by CAVA using CSN7.\n\nDescriptions of the column headings are given below:\n\nSample – sample name in the ICR142 series\n\nGene – HGNC symbol\n\nSangerCall – the most 3’ representation annotated with CSN\n\nType – “bs”, “del”, “ins”, “complex”, or “indel” for base substitutions, simple deletions, simple insertions, complex indels, or negative indel sites, respectively\n\nTranscript – the ENST ID from Ensembl v65 used to annotate the Sanger call\n\nChr – chromosome\n\nEvaluatedPosition – evaluated hg19 site position, centre of designed amplicon\n\nPOS – the left-aligned position in hg19 coordinates for variants called in exome data by Platypus v0.1.5\n\nREF – the reference allele in hg19 for variants called in exome data by Platypus v0.1.5\n\nALT – the alternative allele in hg19 for variants called in exome data by Platypus v0.1.5\n\n\nReferences\n\nLunter G, Goodson M: Stampy: a statistical algorithm for sensitive and fast mapping of Illumina sequence reads. Genome Res. 2011; 21(6): 936–9.\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–60.\n\nRivas MA, Beaudoin M, Gardet A, et al.: Deep resequencing of GWAS loci identifies independent rare variants associated with inflammatory bowel disease. Nat Genet. 2011; 43(11): 1066–73.\n\nMcKenna A, Hanna M, Banks E, et al.: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010; 20(9): 1297–303.\n\nRimmer A, Mathieson I, Lunter G, et al.: Platypus: An Integrated Variant Caller. 2012.\n\nSOFTGENETICS: NextGENe® software for Next Generation (NGS) sequence analysis. 
Münz M, Ruark E, Renwick A, et al.: CSN and CAVA: variant annotation tools for rapid, robust next-generation sequencing analysis in the clinical setting. Genome Med. 2015; 7(1): 76."
}
|
[
{
"id": "13347",
"date": "21 Apr 2016",
"name": "Richard Bagnall",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nA myriad of software tools have been developed for the alignment of next generation sequencing data to a reference genome and for the subsequent genotyping of DNA variants. Evaluating the specificity and sensitivity of a variant calling framework can be achieved with a dataset containing validated genotypes. Ruark et. al. provide the ‘ICR142 NGS validation series’ exome sequence fastq files of 142 individuals, and a large set of corresponding Sanger sequencing validated variant sites and sites where variants were called by an NGS tool, but no variant was found with the corresponding Sanger sequencing.I found the NGS dataset to be easily accessible, on request, from the European Genome-phenome archive and it comprises paired end fastq sequencing files generated by an Illumina sequencing system on the stated 142 individuals. The Sanger sequencing dataset is available as supplementary table 1 of the manuscript. This is a useful resource for evaluating variant calling pipelines.",
"responses": []
},
{
"id": "13013",
"date": "03 May 2016",
"name": "Brad Chapman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe ICR142, a publicly available set of fastq files and confirmed true and false variants for validating analysis pipelines. This is an incredibly useful community resource that complements existing efforts like the Genome in a Bottle project by providing a set of validated, difficult regions to evaluate variant detection tools. I appreciate the efforts to make these test sets public; instead of having validation sets like these developed internally at clinical laboratories, we can collaborate and improve them publicly.In collaboration with Oliver Hofmann at the Wolfson Wohl Cancer Research Center (https://twitter.com/fiamh) we obtained access to the data and were able to run a validation using bcbio variant calling (http://bcbio-nextgen.readthedocs.io). In doing this, we tried to address a couple of challenges for other users wanting to make immediate use of this data in their own in hour validation work:The truth sets are not easy to plug into existing validation frameworks. Most validation tools like rtg vcfeval and hap.py work from VCF format files, while this truth set is in a custom spreadsheet format with a mixture of methods for describing changes. You can use Platypus positions for many but need to use CSN descriptions or evaluated position for the remainder. The truth sets don't appear to describe if we expect calls to be homozygous or heterozygous calls at each position. 
Many existing validation approaches expect a single (or few) samples, so coordinating checking and validation for all these samples can be a challenge.\n\nAs part of this review, we generated a set of configuration files and scripts to help make running validations with ICR142 easier (https://github.com/bcbio/icr142-validation). This comparison work also includes a set of comparisons with common callers (GATK HaplotypeCaller, FreeBayes and VarDict). Several of the Sanger validated regions without variants are false positives in at least 2 of the callers tested, so this dataset exposes some common issues with calling and filtering. It would be useful to hear the author's experience with validating callers using this benchmark set and if they have additional filters used to avoid these problems. Knowing a baseline expectation for results would help ensure that the users understand how correctly they've set up the validation resources.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-386
|
https://f1000research.com/articles/6-1646/v1
|
05 Sep 17
|
{
"type": "Clinical Practice Article",
"title": "A case series report of cancer patients undergoing group body psychotherapy",
"authors": [
"Astrid Grossert",
"Gunther Meinlschmidt",
"Rainer Schaefert",
"Astrid Grossert"
],
"abstract": "Background: Disturbances in bodily wellbeing represent a key source of psychosocial suffering and impairment related to cancer. Therefore, interventions to improve bodily wellbeing in post-treatment cancer patients are of paramount importance. Notably, body psychotherapy (BPT) has been shown to improve bodily wellbeing in subjects suffering from a variety of mental disorders. However, how post-treatment cancer patients perceive and subjectively react to group BPT aiming at improving bodily disturbances has, to the best of our knowledge, not yet been described. Methods: We report on six patients undergoing outpatient group BPT that followed oncological treatment for malignant neoplasms. The BPT consisted of six sessions based on a scientific embodiment approach, integrating body-oriented techniques to improve patients’ awareness, perception, acceptance, and expression regarding their body. Results: The BPT was well accepted by all patients. Despite having undergone different types of oncological treatment for different cancer types and locations, all subjects reported having appreciated BPT and improved how they perceived their bodies. However, individual descriptions of improvements showed substantial heterogeneity across subjects. Notably, most patients indicated that sensations, perceptions, and other mental activities related to their own body intensified when proceeding through the group BPT sessions. Conclusion: The findings from this case series encourage and inform future studies examining whether group BPT is efficacious in post-treatment cancer patients and investigating the related mechanisms of action. The observed heterogeneity in individual descriptions of perceived treatment effects point to the need for selecting comprehensive indicators of changes in disturbances of bodily wellbeing as the primary patient-reported outcome in future clinical trials. 
While increases in mental activities related to their own body are commonly interpreted as important mechanisms of therapeutic action in BPT, follow-up assessments are needed to evaluate intended and unintended consequences of these changes in cancer patients.",
"keywords": [
"body image",
"body integrity",
"body therapy",
"case report",
"group psychotherapy",
"malignant neoplasm",
"movement therapy",
"tumor"
],
"content": "Introduction\n\nCancer is related to high individual and societal burden worldwide, which is caused not only by mortality, but also morbidity and impairment, as indicated by recent analyses in the context of the Global Burden of Disease study and other consortia1–3. Notably, while a significant proportion of cancer-related burden directly originates from the neoplasm and its treatment, psychosocial impairment represents another substantial aspect of cancer-related burden, which is triggered by the experiences and suffering related to cancer, that often persists beyond successful treatment of the tumor itself4–8.\n\nDisturbances in bodily wellbeing represent one key aspect of these psychosocial impairments related to cancer9,10. Notably, the concept of cancer-related disturbances in bodily wellbeing and related constructs, such as perceived body integrity and body image, have been present for a long time, but with varying and sometimes conflicting definitions10–13. In a recent report and analysis, Rhoten examined the concept of ‘body image disturbances’ in the context of cancer. She identified three relevant attributes of body image disturbance: (1) self-perception of a change in appearance and displeasure with the change or perceived change in appearance; (2) decline in an area of function; and (3) psychological distress regarding changes in appearance and/or function12. In line with these attributes, others have stressed that body image is a multidimensional construct, including elements such as perceptions, feelings, and attitudes toward the body, with body image disturbances being highly prevalent in cancer patients10. 
The broader term ‘disturbances in bodily wellbeing’ (or ‘bodily disturbances’) acknowledges the fact that subjective bodily disturbances related to cancer are multifaceted, including key aspects often subsumed under the term ‘body image disturbance’, such as disturbances and related distress in perceptions, feelings, and attitudes toward the body, as well as in subjective appearance and/or function of the body. The concept of bodily disturbances builds on the understanding of the polar relations between bodily and mental processes and experiences. This body-mind perspective has gained considerable attention over the past decade within the embodied cognition or embodiment approach14. In the context of the interventions described in this article, we put experiencing of the body in focus as a central pathway to experiencing and regulating the self, and regard the human being as a subjectively experiencing and embodied acting being that creates meaning in relation to the world15,16.\n\nHealth-related quality of life improvements in cancer patients are typically intended to be achieved using medical interventions, such as the provision of cytotoxic agents8. Additionally, psychosocial interventions, including psychotherapy, for cancer patients and their caregivers have gained considerable attention in the past years17. However, meta-analyses of their efficacy have yielded mixed results, with some suggestive of rather small or no effects associated with classical interventions, such as cognitive behavioral therapy, which are well-established and largely effective in many non-cancer contexts18–21. 
With regard to interventions that act via movement, there is ample evidence that physical activity and exercise i) are safe and feasible for cancer patients, ii) are linked to reduced cancer incidence (prevention), and iii) improve the quality of life and function of cancer patients, but with mostly small effects22–26.\n\nFurthermore, the past several years have seen a renaissance of interventions focusing on existential topics, such as meaning, which have been applied to patients with a variety of life-threatening diseases (e.g., cancer27–32), including ‘Meaning-Centered Group Psychotherapy’, with the aim of helping patients with advanced cancer sustain or enhance a sense of meaning, peace and purpose in their lives33. There is some evidence that body psychotherapy (BPT), defined as ‘psychotherapeutic treatment of mental disease or suffering, concomitantly using bodily and mental psychotherapeutic means’34, is efficacious for the treatment of different mental disorders35–38.\n\nHowever, to the best of our knowledge, there are no data on the application of BPT in the context of cancer. Notably, given that BPT explicitly targets bodily aspects, such as perceptions, feelings, and attitudes toward the body, that are of paramount importance in the context of bodily disturbances in cancer patients (see above), elucidating BPT as an intervention to reduce disturbances of bodily wellbeing appears to be highly promising.\n\nTo conclude, disturbances of bodily wellbeing are highly prevalent in subjects with malignant neoplasm, and wellbeing often subsides despite successful interventions targeting the tumor itself. 
Therefore, identifying how to treat disturbances of bodily wellbeing in post-treatment cancer patients and therapeutic mechanisms is highly warranted, with BPT representing a highly promising and innovative approach.\n\nTo provide insight into the potential use of BPT in post-treatment cancer patients and to inform potential future clinical trials on this topic, we report on a series of six post-treatment cancer patients receiving group BPT aiming at improving bodily disturbances, focusing on how the patients perceived and subjectively reacted to the intervention. These patients are unique, and, to the best of our knowledge, these are the first published reports on the application of BPT in the context of cancer.\n\n\nMethods\n\nBPT was conducted by the first author of this case series (female, age = 35 years, Swiss citizen, trained psychologist and physiotherapist, four years of postgraduate training in body psychotherapy) and took place between October and December 2016. We describe here the key features of the intervention (structure of the description based on the ‘reporting recommendations for group-based behavior-change interventions’39 enriched by relevant elements from the ‘Template for Intervention Description and Replication (TIDieR) checklist and guide’40).\n\nBPT represents an experience-oriented approach34, grounded in the notion that bodily and mental experiences and processes (including more existential topics, such as meaning in life) are closely and mutually related. More specifically, BPT takes advantage of the embodied, enactive and environmentally embedded nature of basic cognition, emotion, intersubjectivity, and experiencing41. In line with recently developed group body psychotherapy manuals for mental disorders (e.g., somatoform disorder, depression) that have been derived from disorder-specific intervention strategies37,42–44, our group BPT followed a general BPT framework. 
Therefore, it focused on specific pathological processes of key relevance in post-treatment cancer patients when applying body-oriented techniques to improve patients’ bodily awareness, perception, emotional connectedness, acceptance, and expression45.\n\nThe overall goal of this group BPT was to resolve bodily disturbances that are caused or triggered by the antecedent cancer and related treatments. Therefore, the group BPT aimed at supporting patients in learning to cope with untoward bodily sensations, feelings, and disturbances, such as changes in overt body image46,47, as well as changes in attitudes toward and perceptions of their own body48, including feelings of insecurity and vulnerability49–51, stigmatization52, impaired functioning51,53, and feelings of disconnectedness from one’s own body50.\n\nWith regard to the setting, the BPT group (six participants) was provided under the auspices of the Krebsliga Beider Basel at their facilities. The group BPT consisted of six sessions of approximately 90 minutes each.\n\nWe chose a group setting because it offers advantages over individual therapy, including facilitation of specific therapeutic factors, such as vicarious learning and economic benefits54,55, in the absence of strong evidence suggesting clear superiority in outcomes of one setting over the other when comparing group and individual psychotherapies56–58.\n\nThe six sessions covered the following topics:\n\n1) General introduction, fostering of group cohesion and focus on bodily perception;\n\n2) Focus on bodily resources and grounding;\n\n3) Focus on closeness and distance regulation;\n\n4) Focus on social interactions and bodily impulses;\n\n5) Focus on embodied emotions; and\n\n6) Summary and transfer session.\n\nSequencing of the sessions was fixed, and every session consisted of four parts:\n\nA) Introduction: Brief bodily exercise and exchange, preparing the specific topic of the session; review of the home task assigned during the past session, 
where appropriate;\n\nB) Exercise: Psycho-educational element and an exercise triggering embodied experiences, focusing on the specific topic of the session;\n\nC) Sharing: Exchange of experiences;\n\nD) Closing: Résumé and farewell, hometask assignment, and outlook.\n\nAn outline of the content of each group BPT session is provided as Supplemental material (see Supplementary File 1). The execution of each session was tailored to the composition of the patient group and respective needs, acknowledging group processes that need to be addressed in connection with the content of each session.\n\nThe tools used during the sessions included materials, such as mats, ropes and balls.\n\nThe group BPT sessions were preceded by initial individual sessions (one per patient, maximum duration of 50 minutes) that were structured and documented using the basis documentation for Psycho-Oncology (PO-Bado)59, assessing sociodemographics, medical history, main symptoms, previous experiences, and individual core topics of relevance within the scope of the intervention. Additionally, validated questionnaires assessing distress and bodily wellbeing (German version of the Body Appreciation Scale [BAS]) were applied60,61.\n\nThe patients were offered an additional facultative individual consultation session (one per patient, maximum duration of 50 minutes) following the last group BPT session to address open questions or to ensure ongoing therapeutic support.\n\nThe group leader took written notes of key statements of the participants during and immediately after the six group therapy sessions. 
To collect information on the patients’ perspective of the therapeutic outcome of this group BPT, the participants provided written feedback at the end of the six sessions, focusing on the following topics: i) perceived changes, including subjective changes related to the perception of their own bodies; ii) group climate and cohesion; and iii) possibilities of creating new personal ties.\n\nThe data and collected information were integrated as follows: The key statements of the participants, enriched by information collected via PO-Bado, were sorted by the first author according to identified themes. These themes were derived from i) distress categories provided by the PO-Bado, and from ii) the list of psychosocial problems provided by the distress thermometer. This was complemented by iii) a resource perspective, and iv) common themes related to bodily experiences, focusing on the main topics of the respective sessions. The statements were then summarized, evaluated, and interpreted from a clinician perspective.\n\n\nResults\n\nInformation on the six patients, including demographic and other patient-specific information, main symptoms and concerns, medical and psychosocial history, past oncological interventions and current cancer disease status, is provided in Table 1. The timeline of initial cancer diagnosis, medical treatment and BPT is depicted in Figure 1. Notably, there were two other post-treatment cancer patients who were interested in participating in the BPT, but ultimately did not participate in the intervention (no further information is provided here because no informed consent to report on their cases in scientific publications was obtained from these subjects). 
Of these eight patients (six participating and two non-participating in the group BPT), five were informed of the group BPT service by the Krebsliga Beider Basel, and three patients were informed of the group by the Department of Psychosomatics at the University Hospital Basel.\n\nAbbreviations: BAS, Body Appreciation Scale; ECOG, Eastern Cooperative Oncology Group\n\nFootnotes:\n\na) Distress thermometer60: visual analog scale, ranging from not stressed = 0 to extremely stressed = 10\n\nb) Selected items from the basis documentation for psycho-oncology59, scale from 0 to 4: 0 = not at all; 1 = slightly; 2 = moderate; 3 = much; 4 = very much\n\nc) Performance score of the ECOG66; 0 = fully active, able to perform all pre-disease tasks without restriction; 1 = restricted in physically strenuous activity but ambulatory and able to perform light or sedentary tasks, e.g., light house work, office work; 2 = ambulatory and capable of all self-care but unable to carry out any work activities; up and about more than 50% of waking hours; 3 = capable of only limited self-care; confined to bed or a chair more than 50% of waking hours; 4 = completely disabled; cannot perform any self-care; completely confined to bed or a chair; 5 = dead\n\nd) Validated German version of the 13-item BAS61, with higher scores reflecting greater body appreciation\n\nBPT, body psychotherapy; Dec, December; Jan, January; Oct, October; Sep, September. a) Two patients were still receiving hormonal treatment while they were attending the group body psychotherapy sessions.\n\nThe key statements of the patients are reported in Table 2 (for privacy protection, the statements are collapsed across patients). Notably, specific descriptions of progress showed substantial heterogeneity across subjects. 
Most patients indicated that sensations, perceptions, and other mental activities related to their own body intensified throughout the group BPT sessions.\n\nThe statements reported here illustrate the most relevant topics and statements made by the participants throughout the therapeutic process. The collection is based on written notes from the group psychotherapist recorded during and directly after the sessions.\n\nFootnote: a) To protect the participants’ privacy, the statements are reported without assigning them to individual patients.\n\nAt the beginning of the group BPT, a majority (5/6) of the participants referred to feelings of being left alone and partially helpless with the disease. Disturbances of bodily wellbeing and feelings of insecurity were commonly reported (4/6).\n\nAt the end of the six BPT sessions, most participants reported improvements in wellbeing (5/6). They mentioned being more aware of physical and emotional boundaries and, therefore, having better knowledge of their coping strategies in conjunction with stress reduction in daily life (5/6). One participant referred to the observation of having gained a new sense of wholeness between body and soul.\n\nIn response to the question “what has been supportive and what has felt effective”, most (5/6) patients stated that they enjoyed the exchange between cancer patients and learning about similar or even completely different experiences related to cancer. Others (3/6) mentioned having time and room to explore bodily wellbeing. One subject mentioned the solidarity and empathy within the group.\n\nAll participants reported feeling comfortable (4/6) or very comfortable (2/6) during the group sessions.\n\nAll participants reported that they felt they were being taken seriously and were supported, and they reported having enjoyed participating in the sessions.\n\nThe majority of the participants (4/6) reported that they were neither under- nor over-challenged by the sessions. 
One participant reported that the movement sessions were challenging due to unfamiliarity, and one person with serious hearing problems was challenged in understanding and following exercises without direct visual contact.\n\nIn response to the question “what was difficult for you”, half (3/6) of the participants replied that their perception was that not all participants had the same willingness or readiness to share their experiences in the group context. One participant stated not having felt a need to perform movement-related exercises, and, therefore, the related sessions were perceived as being too numerous.\n\nAll participants reported that they would recommend this group BPT intervention to other patients.\n\nWith regard to their satisfaction with the number of sessions provided, half of the participants (3/6) were satisfied with six sessions, and half of the participants (3/6) wished to have a minimum of two more sessions (one subject would have preferred 12 sessions).\n\nNo adverse or unanticipated events were reported. All participants were able to ask questions or formulate their concerns. One subject stated that the room was too cold.\n\n\nDiscussion\n\nDespite the subjects having undergone different types of medical treatment for different cancer types and locations, all reported having appreciated the intervention and having progressed in how they perceived their body.\n\nMost of the participants stated that they felt ready to address the bodily dimension of the experience of being affected by cancer and its treatments. The participants also appreciated that the group BPT offered space for them to achieve a new level of experiencing themselves from new perspectives. There were indications that the (bodily) presence of participants was enhanced across the group BPT sessions.\n\nNevertheless, group attendance and confrontation with one’s body (feelings and expression) were sometimes challenging and triggered temporary uncertainty in some subjects. 
Sharing these and other (bodily) experiences was helpful for integrating them.\n\nThe structure of the group BPT appeared to be appropriate to achieve the intended goals. Addressing body experiences and vulnerability following the experience of cancer was highly appreciated in this population. The participants felt positively about gaining new perspectives with regard to their bodily sensations and becoming more aware of relationships between thoughts, emotions and bodily actions. Despite the limited duration of the intervention, patients appeared to transfer this new knowledge into actions in their daily life.\n\nThis case series has several strengths. To the best of our knowledge, this is the first report on group BPT for cancer patients. The study participants showed a certain heterogeneity regarding age, gender, and cancer type, increasing the generalizability of the findings. Furthermore, we collected information not only on the overall subjective outcomes, but also on the perception of therapeutic processes. There are also several limitations that should be noted. We only collected information up to the end of the group BPT intervention. Future studies should also include follow-up assessments. Furthermore, application of established assessment instruments will provide important additional and complementary information on the therapeutic efficacy of the intervention. Lastly, additional information, such as data collected via video or audio-taping, will provide additional methods for exploring therapeutic processes related to the group BPT in more detail.\n\nWith regard to future studies, the observed heterogeneity in individual descriptions of perceived treatment effects points to the need to select rather comprehensive indicators of changes in disturbances of bodily wellbeing as a primary patient-reported outcome in future clinical trials. 
Patients’ reports that group BPT triggered changes with regard to meaning in life and other existential topics encourage more detailed exploration of this domain. Furthermore, linking and reconciling integrative BPT approaches, such as ‘integrative body therapy’, which assumes a ‘self-reflexive socioculturally embedded subject’, with recent insights from the fields of neuroscience and psychobiology, may be essential, though challenging, to fully exploit the current understanding and newly gained knowledge of cancer-related disturbances in bodily wellbeing and related interventions16,62–65.\n\n\nConclusion\n\nThe findings from this case series encourage and inform future studies aimed at identifying whether group BPT is efficacious in post-treatment cancer patients, and at identifying related mechanisms of action. The observed heterogeneity in individual descriptions of detailed treatment effects points to the need to select rather comprehensive indicators of changes in disturbances of bodily wellbeing as a primary outcome in future clinical trials. While increases in mental activities related to one’s own body are commonly interpreted as an important mechanism of therapeutic action, follow-up assessments are needed to evaluate the intended, as well as unintended, consequences of these increases.\n\n\nConsent\n\nEach patient provided written informed consent to report his/her case, including clinical and diagnostic information within this case series.\n\n\nEthics statement\n\nEthical clearance was acquired from the Ethikkommission Nordwest- und Zentralschweiz in Basel, Switzerland (EKNZ) (EKNZ BASEC Req-2017-00513).\n\n\nSupplementary Material\n\nSupplementary file 1: Outline of the content of the group BPT sessions\n\nClick here to access the data.",
"appendix": "Competing interests\n\n\n\nAG had a mandate from the Krebsliga Beider Basel to conduct this group intervention “Krebs und Körperwahrnehmung”. GM has received funding from the Korea Research Foundation within the Global Research Network Program under project no. 2013S1A2A2035364, and from the Swiss National Science Foundation under project no. 100014_135328. GM has been acting as consultant for Janssen Research & Development, LLC. From the other author (RS) no potential competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declare that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nProvision of the intervention was supported by the Krebsliga Beider Basel (http://www.klbb.ch).\n\nWe thank all participants for taking part in the BPT and for enriching the group, as well as for their openness toward the publication of this case series.\n\n\nReferences\n\nGlobal Burden of Disease Cancer Collaboration, Fitzmaurice C, Allen C, et al.: Global, Regional, and National Cancer Incidence, Mortality, Years of Life Lost, Years Lived With Disability, and Disability-Adjusted Life-years for 32 Cancer Groups, 1990 to 2015: A Systematic Analysis for the Global Burden of Disease Study. JAMA Oncol. 2017; 3(4): 524–48. PubMed Abstract | Publisher Full Text\n\nTsilidis KK, Papadimitriou N, Capothanassi D, et al.: Burden of Cancer in a Large Consortium of Prospective Cohorts in Europe. J Natl Cancer Inst. 2016; 108(10): djw127. PubMed Abstract | Publisher Full Text\n\nSoerjomataram I, Lortet-Tieulent J, Ferlay J, et al.: Estimating and validating disability-adjusted life years at the global level: a methodological framework for cancer. BMC Med Res Methodol. 2012; 12: 125. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLehmann V, Hagedoorn M, Tuinman MA: Body image in cancer survivors: a systematic review of case-control studies. J Cancer Surviv. 2015; 9(2): 339–48. 
PubMed Abstract | Publisher Full Text\n\nMitchell AJ, Ferguson DW, Gill J, et al.: Depression and anxiety in long-term cancer survivors compared with spouses and healthy controls: a systematic review and meta-analysis. Lancet Oncol. 2013; 14(8): 721–32. PubMed Abstract | Publisher Full Text\n\nSeitz DC, Besier T, Debatin KM, et al.: Posttraumatic stress, depression and anxiety among adult long-term survivors of cancer in adolescence. Eur J Cancer. 2010; 46(9): 1596–606. PubMed Abstract | Publisher Full Text\n\nMaass SW, Roorda C, Berendsen AJ, et al.: The prevalence of long-term symptoms of depression and anxiety after breast cancer treatment: A systematic review. Maturitas. 2015; 82(1): 100–8. PubMed Abstract | Publisher Full Text\n\nRadice D, Redaelli A: Breast cancer management: quality-of-life and cost considerations. Pharmacoeconomics. 2003; 21(6): 383–96. PubMed Abstract | Publisher Full Text\n\nMa JX, Sun JD, Fu ZT, et al.: [Estimation of disability weights on malignant neoplasms in Shandong province]. Zhonghua Liu Xing Bing Xue Za Zhi. 2008; 29(12): 1208–12. PubMed Abstract | Publisher Full Text\n\nRhondali W, Chisholm GB, Filbet M, et al.: Screening for body image dissatisfaction in patients with advanced cancer: a pilot study. J Palliat Med. 2015; 18(2): 151–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeo I, Novy DM, Chang DW, et al.: Examining pain, body image, and depressive symptoms in patients with lymphedema secondary to breast cancer. Psychooncology. 2015; 24(11): 1377–83. PubMed Abstract | Publisher Full Text\n\nRhoten BA: Body image disturbance in adults treated for cancer - a concept analysis. J Adv Nurs. 2016; 72(5): 1001–11. PubMed Abstract | Publisher Full Text\n\nRöhricht F, Seidler KP, Joraschky P, et al.: [Consensus paper on the terminological differentiation of various aspect of body experience]. Psychother Psychosom Med Psychol. 2005; 55(3–4): 183–90. 
PubMed Abstract | Publisher Full Text\n\nWilson M: Six views of embodied cognition. Psychon Bull Rev. 2002; 9(4): 625–36. PubMed Abstract | Publisher Full Text\n\nGeuter U: Body Psychotherapy: Experiencing the Body, Experiencing the Self. International Body Psychotherapy Journal. 2016; 15: 6–19. Reference Source\n\nPetzold HG: Der „informierte Leib“ - ,,embodied and embedded“ – Leibgedächtnis und performative Synchronisationen. Polyloge. 2017; 3(Neueinstellung von 2002j/2017). Reference Source\n\nOkuyama T, Akechi T, Mackenzie L, et al.: Psychotherapy for depression among advanced, incurable cancer patients: A systematic review and meta-analysis. Cancer Treat Rev. 2017; 56: 16–27. PubMed Abstract | Publisher Full Text\n\nSpiegel D: Minding the body: psychotherapy and cancer survival. Br J Health Psychol. 2014; 19(3): 465–85. PubMed Abstract | Publisher Full Text\n\nZhang M, Huang L, Feng Z, et al.: Effects of cognitive behavioral therapy on quality of life and stress for breast cancer survivors: a meta-analysis. Minerva Med. 2017; 108(1): 84–93. PubMed Abstract | Publisher Full Text\n\nJassim GA, Whitford DL, Hickey A, et al.: Psychological interventions for women with non-metastatic breast cancer. Cochrane Database Syst Rev. 2015; 28(5): CD008729. PubMed Abstract | Publisher Full Text\n\nO'Toole MS, Zachariae R, Renna ME, et al.: Cognitive behavioral therapies for informal caregivers of patients with cancer and cancer survivors: a systematic review and meta-analysis. Psychooncology. 2017; 26(4): 428–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBuffart LM, Kalter J, Sweegers MG, et al.: Effects and moderators of exercise on quality of life and physical function in patients with cancer: An individual patient data meta-analysis of 34 RCTs. Cancer Treat Rev. 2017; 52: 91–104. 
PubMed Abstract | Publisher Full Text\n\nSpahn G, Choi KE, Kennemann C, et al.: Can a multimodal mind-body program enhance the treatment effects of physical activity in breast cancer survivors with chronic tumor-associated fatigue? A randomized controlled trial. Integr Cancer Ther. 2013; 12(4): 291–300. PubMed Abstract | Publisher Full Text\n\nHeywood R, McCarthy AL, Skinner TL: Safety and feasibility of exercise interventions in patients with advanced cancer: a systematic review. Support Care Cancer. 2017; 25(10): 3031–3050. PubMed Abstract | Publisher Full Text\n\nGerritsen JK, Vincent AJ: Exercise improves quality of life in patients with cancer: a systematic review and meta-analysis of randomised controlled trials. Br J Sports Med. 2016; 50(13): 796–803. PubMed Abstract | Publisher Full Text\n\nKyu HH, Bachman VF, Alexander LT, et al.: Physical activity and risk of breast cancer, colon cancer, diabetes, ischemic heart disease, and ischemic stroke events: systematic review and dose-response meta-analysis for the Global Burden of Disease Study 2013. BMJ. 2016; 354: i3857. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChochinov HM, Hack T, Hassard T, et al.: Dignity therapy: a novel psychotherapeutic intervention for patients near the end of life. J Clin Oncol. 2005; 23(24): 5520–5. PubMed Abstract | Publisher Full Text\n\nMartinez M, Arantzamendi M, Belar A, et al.: 'Dignity therapy', a promising intervention in palliative care: A comprehensive systematic literature review. Palliat Med. 2017; 31(6): 492–509. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDonato SC, Matuoka JY, Yamashita CC, et al.: Effects of dignity therapy on terminally ill patients: a systematic review. Rev Esc Enferm USP. 2016; 50(6): 1014–24. PubMed Abstract | Publisher Full Text\n\nFitchett G, Emanuel L, Handzo G, et al.: Care of the human spirit and the role of dignity therapy: a systematic review of dignity therapy research. BMC Palliat Care. 2015; 14: 8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKissane DW, Bloch S, Smith GC, et al.: Cognitive-existential group psychotherapy for women with primary breast cancer: a randomised controlled trial. Psychooncology. 2003; 12(6): 532–46. PubMed Abstract | Publisher Full Text\n\nLee V, Cohen SR, Edgar L, et al.: Meaning-making intervention during breast or colorectal cancer treatment improves self-esteem, optimism, and self-efficacy. Soc Sci Med. 2006; 62(12): 3133–45. PubMed Abstract | Publisher Full Text\n\nBreitbart W, Rosenfeld B, Gibson C, et al.: Meaning-centered group psychotherapy for patients with advanced cancer: a pilot randomized controlled trial. Psychooncology. 2010; 19(1): 21–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeuter U: Körperpsychotherapie: Grundriss einer Theorie für die klinische Praxis. Berlin: Springer; 2015. Publisher Full Text\n\nKoemeda-Lutz M, Kaschke M, Revenstorf D, et al.: [Evaluation of the effectiveness of body-psychotherapy in out-patient settings (EEBP)]. Psychother Psychosom Med Psychol. 2006; 56(12): 480–7. PubMed Abstract | Publisher Full Text\n\nPriebe S, Savill M, Wykes T, et al.: Effectiveness of group body psychotherapy for negative symptoms of schizophrenia: multicentre randomised controlled trial. Br J Psychiatry. 2016; 209(1): 54–61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRöhricht F, Papadopoulos N, Priebe S: An exploratory randomized controlled trial of body psychotherapy for patients with chronic depression. J Affect Disord. 2013; 151(1): 85–91. PubMed Abstract | Publisher Full Text\n\nKreuzer PM, Goetz M, Holl M, et al.: Mindfulness-and body-psychotherapy-based group treatment of chronic tinnitus: a randomized controlled pilot study. BMC Complement Altern Med. 2012; 12: 235. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBorek AJ, Abraham C, Smith JR, et al.: A checklist to improve reporting of group-based behaviour-change interventions. BMC Public Health. 
2015; 15: 963. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoffmann TC, Glasziou PP, Boutron I, et al.: Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014; 348: g1687. PubMed Abstract | Publisher Full Text\n\nRöhricht F, Gallagher S, Geuter U, et al.: Embodied cognition and body psychotherapy: The construction of new therapeutic environments. Sensoria: A Journal of Mind, Brain & Culture. 2014; 10(1): 11–20. Publisher Full Text\n\nRöhricht F: Body oriented psychotherapy. The state of the art in empirical research and evidence-based practice: A clinical perspective. Body Mov Dance Psychother. 2009; 4(2): 135–56. Publisher Full Text\n\nRöhricht F, Priebe S: Effect of body-oriented psychological therapy on negative symptoms in schizophrenia: a randomized controlled trial. Psychol Med. 2006; 36(5): 669–78. PubMed Abstract | Publisher Full Text\n\nRöhricht F, Elanjithara T: Management of medically unexplained symptoms: outcomes of a specialist liaison clinic. Psychiatr Bull (2014). 2014; 38(3): 102–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaul E, Fischer M: Einführung in die Integrative Körperpsychotherapie IBP (Integrative Body Psychotherapy). Bern: Hogrefe; 2016. Reference Source\n\nVuotto SC, Ojha RP, Li C, et al.: The role of body image dissatisfaction in the association between treatment-related scarring or disfigurement and psychological distress in adult survivors of childhood cancer. Psychooncology. 2017. PubMed Abstract | Publisher Full Text\n\nFingeret MC, Teo I, Epner DE: Managing body image difficulties of adult cancer patients: lessons from available research. Cancer. 2014; 120(5): 633–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnöbohm C, Friedrichsen M, Heiwe S: Experiencing one's body after a diagnosis of cancer--a phenomenological study of young adults. Psychooncology. 2010; 19(8): 863–9. 
PubMed Abstract | Publisher Full Text\n\nSekse RJ, Gjengedal E, Råheim M: Living in a changed female body after gynecological cancer. Health Care Women Int. 2013; 34(1): 14–33. PubMed Abstract | Publisher Full Text\n\nLindwall L, Bergbom I: The altered body after breast cancer surgery. Int J Qual Stud Heal. 2009; 4(4): 280–7. Publisher Full Text\n\nErvik B, Asplund K: Dealing with a troublesome body: a qualitative interview study of men's experiences living with prostate cancer treated with endocrine therapy. Eur J Oncol Nurs. 2012; 16(2): 103–8. PubMed Abstract | Publisher Full Text\n\nEsser P, Mehnert A, Johansen C, et al.: Body image mediates the effect of cancer-related stigmatization on depression: A new target for intervention. Psychooncology. 2017. PubMed Abstract | Publisher Full Text\n\nBoquiren VM, Esplen MJ, Wong J, et al.: Sexual functioning in breast cancer survivors experiencing body image disturbance. Psychooncology. 2016; 25(1): 66–76. PubMed Abstract | Publisher Full Text\n\nBloch S, Crouch E, Reibstein J: Therapeutic factors in group psychotherapy. A review. Arch Gen Psychiatry. 1981; 38(5): 519–26. PubMed Abstract | Publisher Full Text\n\nKersting A, Reutemann M, Staats H, et al.: [Therapeutic factors of outpatient group psychotherapy - the predictive validity of the Group Experience Questionnaire (GEQ)]. Psychother Psychosom Med Psychol. 2002; 52(7): 294–301. PubMed Abstract | Publisher Full Text\n\nNevonen L, Broberg AG: A comparison of sequenced individual and group psychotherapy for patients with bulimia nervosa. Int J Eat Disord. 2006; 39(2): 117–27. PubMed Abstract | Publisher Full Text\n\nKellett S, Clarke S, Matthews L: Delivering group psychoeducational CBT in Primary Care: comparing outcomes with individual CBT and individual psychodynamic-interpersonal psychotherapy. Br J Clin Psychol. 2007; 46(Pt 2): 211–22. 
PubMed Abstract | Publisher Full Text\n\nO'Shea G, Spence SH, Donovan CL: Group versus individual interpersonal psychotherapy for depressed adolescents. Behav Cogn Psychother. 2015; 43(1): 1–19. PubMed Abstract | Publisher Full Text\n\nKnight L, Mussell M, Brandl T, et al.: Development and psychometric evaluation of the Basic Documentation for Psycho-Oncology, a tool for standardized assessment of cancer patients. J Psychosom Res. 2008; 64(4): 373–81. PubMed Abstract | Publisher Full Text\n\nMehnert A, Mueller D, Lehmann C, et al.: The German version of the NCCN distress thermometer: validation of a screening instrument for assessment of psychosocial distress in cancer patients. Zeitschrift für Psychiatrie Psychologie und Psychotherapie. 2006; 54(3): 213–23.\n\nSwami V, Stieger S, Haubner T, et al.: German translation and psychometric evaluation of the Body Appreciation Scale. Body Image. 2008; 5(1): 122–7. PubMed Abstract | Publisher Full Text\n\nHausmann B, Neddermeyer R: BewegtSein: Integrative Bewegungs- und Leibtherapie; Erlebnisaktivierung und Persönlichkeitsentwicklung (zeitpunkt musik). Wiesbaden: Reichert Verlag; 2011. Reference Source\n\nHöhmann-Kost A: Bewegung ist Leben: Integrative Leib-und Bewegungstherapie – eine Einführung. Bern Göttingen: Hans Huber Verlag; 2002. Reference Source\n\nLangewitz W: Leib und Körper in der Psychotherapie. PiD-Psychotherapie im Dialog. 2016; 17(1): 22–8. Publisher Full Text\n\nWaibel MJ, Jacob-Krieger C, editors: Integrative Bewegungstherapie: Störungsspezifische und ressourcenorientierte Praxis. Stuttgart: Schattauer; 2008. Reference Source\n\nOken MM, Creech RH, Tormey DC, et al.: Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol. 1982; 5(6): 649–55. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "25718",
"date": "15 Sep 2017",
"name": "Ulrich Sollmann",
"expertise": [
"Reviewer Expertise As a body-psychotherapist I have worked with, among others, cancer patients for more than 25 years. I usually publish case studies in the field of qualitative research. Research is done on my own or together with colleagues under certain perspectives. My first publication on body-psychotherapy in a group setting with cancer-patients was published in 1990. I do research in Germany as well as in China."
],
"suggestion": "Approved",
"report": "Approved\n\nGoal, structure and results of the report are described very clearly. The report is comprehensible, understandable and verifiable. The results include enough data about the socio-demographic aspects, medical history, etc. The results also mirror, and this is very important, individual and personal statements which are related to the individual sessions in the group as well as to the individual patients and their concerns. The combination of both kinds of results seems to be a helpful illustration, so that somebody who didn’t join the group can better imagine what the patients experienced in the group, personally and bodily. This also gives a deep insight into the therapist’s perspective, the therapist’s awareness and perception.\nThe structure (six sessions in the group) is short and long enough to establish a process of body self-experience. The group setting invites one to get a really competent overview of what had happened and why it happened.\nI guess that the distinction between positive effects on the group level and the individually perceived treatment could be very interesting for future research.\nIt’s remarkable that the authors refer elaborately to various literature in the field of body-psychotherapy. This could be an important step toward better integrating body-psychotherapy into the field of psychosomatic medicine. 
This is urgently needed.\nThere are some remarks I want to add which could be considered in future research:\nIt could be helpful to define the specific concept of body-psychotherapy in the group and thus included relevance of psycho-educative aspects.\n\nIt would be very helpful to get a better understanding of a specific concept of body-psychotherapy which was used in the group. The field of body-psychotherapy offers a very big variety of approaches in body-psychotherapy. It could be helpful to better understand the background. This understanding could be necessary to better understand and interpret the results of such a study.\n\nAccording to the literature on working with body-psychotherapy in the group with cancer-patients there is more literature being available of course for many years. It could be interesting to discuss the results of such a study on the background of scientific research in the period of the last 25 years.\n\nIs the background of the cases’ history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the conclusion balanced and justified on the basis of the findings? Yes",
"responses": [
{
"c_id": "3923",
"date": "23 Aug 2018",
"name": "Gunther Meinlschmidt",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Dear Dr. Sollmann, We thank you very much for providing us with insightful and constructive reviewer comments on our article. We would like to take the opportunity to reply to your comments and provide a revised version. We considered all points raised and include our point-by-point responses below:\n\nWe thank the reviewer for the overall positive feedback.\n\nComment 1: It could be helpful to define the specific concept of body-psychotherapy in the group and thus included relevance of psycho-educative aspects.\n\nWe agree with the reviewer that more information on the interventions and exercises would be an asset. We therefore plan to publish an in-depth description of the intervention as a separate publication in the upcoming year. We thereby intend to take advantage of F1000 articles as ‘living’ („even after peer review is complete: Authors can ‘update’ their articles at any time (and at no extra charge) if there have been small developments relevant to the findings“) and will insert into the article a link to this separate publication as soon as possible.\n\nComment 2: It would be very helpful to get a better understanding of a specific concept of body-psychotherapy which was used in the group. The field of body-psychotherapy offers a very big variety of approaches in body-psychotherapy. It could be helpful to better understand the background. This understanding could be necessary to better understand and interpret the results of such a study.\n\nThe concept of this group body psychotherapy approach is rooted in the integrative body psychotherapy (IBP) movement, conceptualized by Jack Lee Rosenberg. The IBP approach is described in more detail in reference 42. In the revised version of the manuscript, we have added an additional reference (Rosenberg, J.L., Rand, M. L. and Asay, D. (1985); Body, Self, and Soul. Sustaining Integration. Humanics Limited, Humanics New Age; Atlanta, Georgia) in which the origins are further outlined. 
Comment 3: According to the literature on working with body-psychotherapy in the group with cancer-patients there is more literature being available of course for many years. It could be interesting to discuss the results of such a study on the background of scientific research in the period of the last 25 years.\n\nWe thank the reviewer for pointing this out and expanded the discussion section accordingly: “these findings are in line with previous reports (e.g. 64)”."
}
]
},
{
"id": "25724",
"date": "10 Oct 2017",
"name": "Manfred Thielen",
"expertise": [
"Reviewer Expertise I am from my scientific background Dr. phil. and Dipl.-Psych. and lecturer at the university of Magdeburg-Stendal. I have written numerous scientific articles in the field of Body Psychotherapy and am editor of some books on Body Psychotherapy. I have worked since 1984 as a Body Psychotherapist and am director of the Institut für Körperpsychotherapie Berlin, where trainings in Body Psychotherapy take place. I am the president of the German Society of Body Psychotherapy (Deutsche Gesellschaft für Körperpsychotherapie, DGK). I organized a congress on the theme Body Psychotherapy in Groups in 2011 and have written an article, “The body in the field of the group” (Thielen, M. Der Körper im Feld der Gruppe. Charakteristika der Körpergruppenpsychotherapie. In Thielen, M. (Hrsg.) (2013). Körper-Gruppe-Gesellschaft. Neue Entwicklungen in der Körperpsychotherapie. Gießen: Psychosozial-Verlag)."
],
"suggestion": "Approved",
"report": "Approved\n\nThe article is a clearly structured report on the results of the study and as such is both coherent and comprehensible. The scientific framework and the socio-demographic aspects of the study are exactly specified.\nThe findings of the body psychotherapeutic group therapy with cancer patients are exceedingly interesting, particularly as the group therapy was a short-term therapy of only 6 sessions. It is clear from the combined, differentiated statements of the group participants that all of them have profited from the therapy, especially from the body psychotherapeutic approach. They have all attained new insights into their perceptions of body sensations and the relationship between thoughts, feelings and body activities. 5 of the 6 patients stated that their awareness of their physical and emotional borders had improved and that they are therefore able to reduce stress in their daily lives.\nThe study shows impressively, and probably for the first time, that this body psychotherapeutic group therapy with cancer patients has had very positive results.\nOf course these first positive results will have to be confirmed and elaborated by further studies. Since Wilhelm Reich (1948, 2001) there have been several approaches in body psychotherapy to the study of the origins and therapy of cancer, but the aspect of group therapy has not until now been examined. 
This study does therefore have an important pioneering aspect.\nIn the light of its merits, my critical comments on the study are of minor significance. But I would have wished that the exact interventions and exercises, and the body psychotherapy approach in which they originate, had been specified. It would also have been interesting to learn how the patients reacted to specific body psychotherapy interventions.\nI also have some questions about the short-term setting of the study. From my own experience over many years as leader of body psychotherapeutic groups, I see groups as having a more sustained long-term effect if they take place over 2-3 years than if they are short-term.\n\nIs the background of the cases’ history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the conclusion balanced and justified on the basis of the findings? Yes",
"responses": [
{
"c_id": "3922",
"date": "23 Aug 2018",
"name": "Gunther Meinlschmidt",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Dear Dr. Thielen, We thank you very much for providing us with insightful and constructive reviewer comments on our article. We would like to take the opportunity to reply to your comments and provide a revised version. We considered all points raised and include our point-by-point responses below:\n\nComment 1: “In the light of its merits, my critical comments on the study are of minor significance….”\n\nWe thank the reviewer for the overall positive feedback.\n\nComment 2: “I would have wished that the exact interventions and exercises … had been specified.”\n\nWe agree with the reviewer that more information on the interventions and exercises would be an asset. We therefore plan to publish an in-depth description of the intervention as a separate publication in the upcoming year. We thereby intend to take advantage of F1000 articles as ‘living’ („even after peer review is complete: Authors can ‘update’ their articles at any time (and at no extra charge) if there have been small developments relevant to the findings“) and will insert into the article a link to this separate publication as soon as possible.\n\nComment 3: “I would have wished that (…) the body psychotherapy approach in which they originate, had been specified.”\n\nThe concept of this group body psychotherapy approach is rooted in the integrative body psychotherapy (IBP) movement, conceptualized by Jack Lee Rosenberg. The IBP approach is described in more detail in reference 42. In the revised version of the manuscript, we have added an additional reference (Rosenberg, J.L., Rand, M. L. and Asay, D. (1985); Body, Self, and Soul. Sustaining Integration. Humanics Limited, Humanics New Age; Atlanta, Georgia) in which the origins are further outlined. 
Comment 4: “It would also have been interesting to learn how the patients reacted to specific body psychotherapy interventions”\n\nWe completely agree with the reviewer that information on how the patients reacted to specific body psychotherapy interventions would be highly interesting. Unfortunately, we did not collect respective data this time, but intend to do so in future studies. We added a respective sentence in the discussion section of our manuscript: “We did not systematically assess how patients reacted to specific body psychotherapy interventions, which however would have been highly interesting, for example with regard to informing current efforts to develop more modular, individualized/personalized interventions.”\n\nComment 5: “I also have some questions about the short-term setting of the study. From my own experience over many years as leader of body psychotherapeutic groups, I see groups as having a more sustained long-term effect if they take place over 2-3 years than if they are short-term.”\n\nWe fully agree that whether short- or long-term body psychotherapy interventions for cancer patients show better and more sustained effects is a very interesting question, which however goes beyond the scope of this project. We added a respective sentence in the discussion section, highlighting that this is a relevant issue for further studies: “Notably, we here provide information on short-term body psychotherapy. Comparing the effects and their sustainability of short- and long-term body psychotherapy should be addressed in future studies.”"
}
]
}
] | 1
|
https://f1000research.com/articles/6-1646
|
https://f1000research.com/articles/7-536/v1
|
03 May 18
|
{
"type": "Software Tool Article",
"title": "WordCommentsAnalyzer: A windows software tool for qualitative research",
"authors": [
"Ehsan Abdekhodaie",
"Javad Hatami",
"Hadi Bahrami Ehsan",
"Reza Kormi-Nouri",
"Ehsan Abdekhodaie",
"Hadi Bahrami Ehsan",
"Reza Kormi-Nouri"
],
"abstract": "There is a lack of free software that provides a professional and smooth experience in text editing and markup for qualitative data analysis. Word processing software like Microsoft Word provides a good editing experience, allowing the researcher to effortlessly add comments to text portions. However, organizing the keywords and categories in the comments can become a more difficult task when the amount of data increases. We present WordCommentsAnalyzer, a software tool that is written in C# using .NET Framework and OpenXml, which helps a qualitative researcher to organize codes when using Microsoft Word as the primary text markup software. WordCommentsAnalyzer provides an effective user interface to count codes, to organize codes in a code hierarchy, and to see various data extracts belonging to each code. We illustrate how to use the software by conducting a preliminary content analysis on Tweets with the #successfulaging hashtag. We hope this open-source software will facilitate qualitative data analysis by researchers who are interested in using Word for this purpose.",
"keywords": [
"Computer assisted qualitative data analysis software",
"Microsoft Word",
"comments",
"coding",
"thematic analysis",
"code hierarchy tree"
],
"content": "Introduction\n\nCommercial qualitative data analysis (QDA) software tools such as NVivo and Atlas.ti seem to be the most popular in the qualitative research community1. However, learning to use these complex software tools may be inconvenient for some researchers. Moreover, the purchase of commercial QDA software may not be affordable for some researchers. On the other hand, free or open-source solutions that are available often do not provide a smooth editing and markup experience (e.g., QDA Miner Lite does not support Persian and Arabic languages; CATMA and CAT2 are not fast due to their web-based nature). For these reasons, some researchers use professional word processing programs for their qualitative research projects.\n\nThe use of Microsoft Word for QDA is commonly documented3,4. Using Word comments provides a straightforward way to annotate specific portions of the text and attach keywords or categories (codes) to them. However, as the amount of data grows, organizing codes in Word comments becomes an exhausting task.\n\nIn this article, we present WordCommentsAnalyzer, a free, open-source tool that makes it possible for qualitative researchers to automate organization of the qualitative codes through a fast and easy-to-learn user interface while coding the textual material using Microsoft Word as a professional, familiar word procesing software.\n\n\nMethods\n\nThis software is written in C# programming language using .NET Framework 4.5.2. The software also makes use of OpenXml library to extract comments from Word documents. Recent versions of Word store documents in XML format. OpenXml provides an easy way to query comments from a document. To facilitate assigning multiple codes to a piece of text, we assume a simple convention: different codes are entered in a comment with line breaks between them (as the descendant paragraphs of the comment element). 
The software uses a relational model approach to store the extracted codes and uses language-integrated queries (LINQ) to collect the different text portions related to each code, to calculate the code frequencies and to sort the codes by frequency. The visual interface of the program consists of three side-by-side panels (Figure 1). The left panel shows the codes in the comments with their counts, the middle one provides a code tree in which the user can intuitively organize their codes, and the right panel shows the data extracts pertaining to each code. In the left panel, the code list can be filtered to find specific codes. The user can place codes in the code hierarchy simply by using drag-and-drop. The tree also enables the user to move codes in the hierarchy if needed. The user can introduce a new parent code or a code that is of a higher level of abstraction. Additionally, codes can be changed or combined by wrapping them in new codes. The code hierarchy tree is saved as a tab-indented text file in the data folder (codehierarchy.txt). The tree is auto-saved every minute and can also be manually saved by clicking a save button in the interface. The previous tree files are backed up in a subfolder of the data folder.\n\nThe left panel shows the codes in the comments with their counts, the middle panel provides a code tree for intuitive organization of the codes and the right panel shows the data extracts pertaining to each code (or to children of a parent code). The code list in the left panel can be filtered to find specific codes. The user can place codes in the code hierarchy simply by using drag-and-drop. The tree also enables the user to move codes in the hierarchy if needed. The user can introduce a new parent code. Codes can be changed or combined by wrapping them in new codes.\n\nThe requirements for this software are Windows 7 or later and .NET Framework 4.5.2. 
After installing the .NET Framework, the user can unzip the release package from the GitHub link and run the “WordCommentsAnalyzer.exe” executable file. The program supports XML Word documents (using the .docx extension). Older Word documents (using the .doc extension) can be easily converted to XML documents by Word 2003 or later (there are also resources available on the web to batch-convert older Word documents). The program allows multiple Word files to be analyzed. This feature can be utilized to separate transcripts of different interview or focus group sessions into different files.\n\n\nUse case\n\nTo illustrate how to use the software, we present a mini-study of Tweets posted from 17 January 2017 to 10 April 2018. The Tweets with the #successfulaging hashtag were copied into two Word documents based on the year in which the Tweets were posted (Supplementary File 1). We reviewed the Tweets and added comments (line-break-separated codes) to portions of text containing interesting notions related to successful aging. Two examples of these text portions are reproduced in Figure 2.\n\nThe codes describe notable topics concerning the text samples.\n\nAfter adding comments to the Word documents, we run WordCommentsAnalyzer, select the folder containing the Word documents and click the Analyze button. The program analyzes the comments and shows a list of codes with their counts in the left panel. The middle panel enables us to organize the codes by placing them in a code hierarchy (Figure 3). For example, we can find a number of codes related to health by filtering the code list by the word “health”. Then we add the code “Health”, which is a parent code, to the hierarchy by dragging and dropping it onto the root node, “Code Hierarchy”. The codes “Brain health”, “Physical health”, and “Health care” can then be dragged and dropped onto the “Health” node. Likewise, “Oral health” is inserted into “Physical health”. 
When organizing the codes, we could check the right panel to ensure the data extracts support the codes. Also, the codes inserted into the hierarchy will be highlighted in the code list to help keep track of the organized codes.\n\nThe user can find specific codes by filtering the code list (e.g., by the word “health”) and organize the codes (from the left panel) by dragging and dropping them into the code hierarchy tree (the middle panel).\n\nFigure 4 presents a formatted version of codehierarchy.txt (Supplementary File 2) when we organized the Tweet codes with at least two counts. As shown in this figure, health, retirement, happiness and being active are the richest themes in the Tweets with the hashtag #successfulaging.\n\nCode tree obtained when we organized the Tweet codes with at least two counts. The large branches of the code tree can help the researcher identify the richest themes in the data. Thus, themes of health, retirement, happiness, and being active are probably the major themes in the Tweets with the hashtag #successfulaging.\n\n\nConclusion\n\nThis article presents a Windows software tool for organizing comments in Word documents. WordCommentsAnalyzer facilitates organizing codes in a code hierarchy for qualitative researchers who are interested in using Word documents to annotate their data.\n\n\nSoftware availability\n\nSource code available from: https://github.com/ehsabd/word-comments-analyzer.\n\nArchived source code at time of publication: https://doi.org/10.5281/zenodo.12286045.\n\nLicense: GNU General Public License 3.0.",
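The tab-indented codehierarchy.txt format described in the Methods lends itself to simple reuse outside the tool. As a sketch (in Python rather than the tool's C#, and assuming only the "one code per line, depth given by leading tabs" convention stated in the article — any further file details are assumptions):

```python
def parse_hierarchy(text):
    """Parse a tab-indented code hierarchy into (depth, code) pairs,
    where depth is the number of leading tab characters."""
    pairs = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        depth = len(line) - len(line.lstrip("\t"))
        pairs.append((depth, line.strip()))
    return pairs

def children_of(pairs, parent):
    """Return the codes nested directly under `parent`
    (exactly one level deeper, until the branch ends)."""
    out, capture_depth = [], None
    for depth, code in pairs:
        if capture_depth is not None:
            if depth == capture_depth:
                out.append(code)       # direct child
            elif depth < capture_depth:
                break                  # left the parent's branch
        if code == parent:
            capture_depth = depth + 1
    return out
```

For example, parsing "Health" with children "Physical health" and "Brain health", and grandchild "Oral health", reproduces the branch structure shown for the #successfulaging codes.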
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1. Tweets hashtagged with #successfulaging from 17 January 2017 to 10 April 2018.\n\nClick here to access the data.\n\nSupplementary File 2. The tab-indented text file of code hierarchy.\n\nClick here to access the data.\n\n\nReferences\n\nLewis RB: NVivo 2.0 and ATLAS.ti 5.0: A Comparative Review of Two Popular Qualitative Data-Analysis Programs. Field Methods. 2004; 16(4): 439–64. Publisher Full Text\n\nLu C-J, Shulman SW: Rigor and flexibility in computer-based qualitative research: Introducing the Coding Analysis Toolkit. Int J Mult Res Approaches. 2008; 2(1): 105–17. Publisher Full Text\n\nChenail RJ, Duffy M: Utilizing Microsoft® Office to produce and present recursive frame analysis findings. Qual Rep. 2011; 16(1): 292. Reference Source\n\nLa Pelle N: Simplifying qualitative data analysis using general purpose software tools. Field Methods. 2004; 16(1): 85–108. Publisher Full Text\n\nAbdekhodaie E: WordCommentsAnalyzer: A windows software tool for qualitative research (Version 2.0.2.1). Zenodo. 2018. Data Source"
}
|
[
{
"id": "34052",
"date": "11 Jun 2018",
"name": "Ronggui Huang",
"expertise": [
"Reviewer Expertise Sociology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nVarious QDA software provide similar functionalities in terms of coding operations and organization of codes, for instance, RQDA (http://rqda.r-forge.r-project.org/), WeftQDA (https://www.pressure.to/qda/), Py3QDA (https://github.com/Ronggui/PyQDA/), among others. A more systematic comparison of existing tools and WordCommentsAnalyzer will provide a clear picture on the relative advantages and disadvantages of the latter.\nIt seems that WordCommentsAnalyzer mainly organizes the codes and shows the related coded text segments, but does not support coding operations on-the-fly. The operations of coding, remove coding, and re-coding have to be conducted on the Word Processor side. It would be helpful to potential users to describe this point clearly.\nSince the operation of coding must be done with Word Processor, it seems that coders have to remember and type the names of codes directly or via the copy-and-paste method. It would be valuable if coders can do the coding via WordCommentsAnalyzer.\nThe most obvious advantage of WordCommentsAnalyzer is its easy use, especially for Windows users.\nOverall, WordCommentsAnalyzer is a valuable new tool for organizing codes based on Word Processor.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? 
Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3931",
"date": "04 Sep 2018",
"name": "Ehsan Abdekhodaie",
"role": "Author Response",
"response": "We would like to thank Dr. Huang for reviewing our software/manuscript and for his kind comments about the potential value this software provides. Here we respond to the reviewer‘s comments as follows: We added a ‘more systematic comparison’ of existing tools and WordCommentsAnalyzer through a table which included three commercial QDA tools and RQDA along with WordCommentsAnalyzer. We also discussed the relative pros and cons of the program. We asserted that our program is not meant to do coding operations on-the-fly and these operations should be done on the word-processing side. The reviewer mentioned the concern that it may be difficult for the researcher to memorize and type the names of codes or copy-and-paste them into Word. We added a feature that enables the user to drag and drop codes (from either the code list or the hierarchy tree) into the Word comment. So the user can both remain on the word-processing side for editing the files and reuse the developed code list/hierarchy effectively."
}
]
},
{
"id": "36504",
"date": "02 Aug 2018",
"name": "Yazdan Mansourian",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article reports an overall description about developing a free and open source software designed for qualitative data analysis. This software assists researchers to organise and categorise the initial codes they create through data analysis process in various qualitative methods such as thematic analysis, ethnography and grounded theory. As this software is free and easily accessible, it will be useful for many researchers who do not have access to more sophisticated tools.\nThe authors highlighted the ease of use and free access of this tool as its main benefits and I do agree with them. Nonetheless, they did not compare the capabilities and performance of this software with one or two well-known tools in this field to explain its strengths and weaknesses. Obviously, the existing tools have several options for deeper level of analysis and also some facilities for analysis of non-text data such as image, audio and video. As a result, I think the authors should also remind the reader about the inevitable limitations of this product. I also recommend the following revisions in the article:\n1. The article should be more informative in terms of distinctive features of this software comparing to similar tools in this context.\n2. The first citation in the introduction section (Lewis, 2004) has been published 14 years ago while since then the area of qualitative data analysis (QDA) has been developed considerably. 
As a result, I recommend to include a few recent citations in this section to provide the reader with a more accurate image of the current trends and issues in this area.\n3. The case presented in this article is based on a relatively small collection of data. Accordingly, we still do not know how effective the software will be in larger data sets. As sometimes, the huge volume of the data in large scale studies may reduce the performance of a software.\n4. The authors mentioned popular qualitative data analysis tools such as QDA Miner Lite does not support Persian and Arabic languages. Nonetheless, the case presented in the article is English and we still do not know about the real performance of this new tool in other languages such as Persian and Arabic.\n5. The conclusion is too brief at this stage and it should be more detailed and insightful. A good conclusion summaries the main points of the article and invites the reader to think further about the focal point presented in the discussions.\nIn general, this article is based on a creative idea but still requires some revisions and I hope the software presented here will be useful for researchers who use qualitative approach in their studies.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": [
{
"c_id": "3930",
"date": "04 Sep 2018",
"name": "Ehsan Abdekhodaie",
"role": "Author Response",
"response": "We thank Dr. Mansourian for his valuable comments about our manuscript/software. The comments surely helped us improve both the software and the manuscript. Some aspects of the software have been updated at the time we present this revised manuscript. Indeed, apart from the tweaks done in the software performance so that it can handle larger datasets, comparison of the software features to the features of other well-known QDA software compelled us to add some useful features to our software including search in code hierarchy feature and visualization tools. We also specifically respond to the reviewers comments in the following: 1. We compared the features of this software to the similar tools in a new Table in the manuscript and added an accompanying paragraph which highlights the main points regarding this comparison and describes the specific limitations of our software. We also mentioned the limitations of WordCommentsAnalyzer regarding analysis of non-textual materials, doing complex queries, or presenting sophisticated visualizations. 2. We added a few citations. Specifically, one to provide more recent data on the usage of popular QDA software (Woods, Paulus, Atkins and Macklin, 2016) and others to give the readers a glance on current challenges in learning complex QDA software (Silver and Rivers, 2014; Woods, Macklin, and Lewis, 2016). 3, 4. The software indeed had issues with large dataset as the reviewer suspected. Consequently, we have updated the software so that it can handle larger datasets more efficiently. Then, we added another use case that is based on a collection of Iranian journal abstracts to test the software performance on a large dataset. This way we could also address the reviewer’s comment about performance of the tool in the presence of Persian/Arabic texts. 
It is worthy to note that, we developed the feature of search in the hierarchy and move in the hierarchy since we realized that it is not feasible for large datasets to find the codes by scrolling the hierarchy tree or to move the codes by merely dragging and dropping. 5. We tried to make the conclusion more informative and more linked to the rationale of the study. Also, we revised it so as to present a better summary of the points made in the article."
}
]
}
] | 1
|
https://f1000research.com/articles/7-536
|
https://f1000research.com/articles/7-1406/v1
|
04 Sep 18
|
{
"type": "Brief Report",
"title": "Antimicrobial activity of Terminalia catappa brown leaf extracts against Staphylococcus aureus ATCC 25923 and Pseudomonas aeruginosa ATCC 27853",
"authors": [
"Ovin Qonita Allyn",
"Eko Kusumawati",
"Rudy Agung Nugroho",
"Ovin Qonita Allyn",
"Eko Kusumawati"
],
"abstract": "The aim of this study was to determine the effects of various concentration of Terminalia catappa brown leaves extract which can inhibit the growth of Staphylococcus aureus ATCC 25923 and Pseudomonas aeruginosa ATCC 27853. The crushed-brown leaves of Terminalia catappa was extracted using 95% ethanol, filtered, and evaporated. The dried T. catappa extract was used to identify phytochemical content qualitatively. Total phenolic and flavonoid contents were also measured quantitatively from dried extract. The dried extracts were also dissolved in sterile aquadest and serial dilutions were prepared to final concentration of 30, 60 and 90%. A disc diffusion method was used to evaluate the antibacterial activity of various concentrations of ethanol extract of brown leaves of T. catappa. Inhibition zone diameter was measured to determine antibacterial activity. Gentamycin sulfate and distilled water were used as positive and negative controls, respectively. Dried ethanolic extract of brown T. catappa leaves contained flavonoid, quinon, phenolic, triterpenoid, and tannin. A total of 208.722 mg gallic acid equivalent/g extract of total phenolic and 35.7671 mg quercetin equivalent/g extract of total flavonoid were also found in the dried extract. The inhibition zone diameters of ethanolic extracts ranged from 1.73 to 9.06 mm (S. aureus) and from 1.83 to 6.5 mm (P. aeruginosa). The higher concentration of extract, the wider the inhibition zone diameters for both bacteria. P. aeruginosa was more resistant to high concentrations of extract (90%) than S. aureus. Ethanolic extracts of the brown leaves of T. catappa had different antibacterial effects against S. aureus and P. aeruginosa. The higher the concentration of extract, the wider the inhibition zone diameter for both bacteria. P. aeruginosa was more resistant to high concentrations of ethanolic extracts of the brown leaves of T. catappa.",
"keywords": [
"Terminalia catappa",
"phytochemicals",
"Staphylococcus aureus",
"Pseudomonas aeruginosa",
"antibacterial"
],
"content": "Introduction\n\nStaphylococcus and Pseudomonas species have been identified as causative agents of disease and serious pathogens in many aquatic animals, including fish1–4, resulting in high mortality rates in many commercially farmed fish. Among various Staphylococcus and Pseudomonas species, Staphylococcus aureus and Pseudomonas aeruginosa are known to cause disease in Oreochromis niloticus and Oreochromis mossambicus5. To reduce high mortality rates in farmed fish, aquaculturists and researcher used chemical agents and antibiotics to promote growth or prevent S. aureus and P. aeruginosa infection6.\n\nHowever, the use of antibiotics to prevent and cure common infectious diseases in fish is becoming increasingly limited due to environmental concern, and increasingly expensive and ineffective because of microbial resistance7–9. As alternatives, various plant extracts, such as those of Boesenbergia pandurata, Zingiber zerumbet and Solanum ferox, have been tested and used as an alternative to antibiotics10–12. Another potential plant extract that can be used as an antimicrobial is that of Terminalia catappa, which is widely distributed in tropical and sub-tropical regions, including Indonesia13,14.\n\nTerminalia catappa L., belonging to the family Combretaceae, is a large deciduous tree. The aqueous extract of Terminalia catappa leaves has been known as a folk medicine for antipyretic, hemostatic, hepatitis and liver-related diseases purposes in the Philippines, Malaysia and Indonesia15,16. Past research revealed that the extract of T. catappa leaves can be used to improve a resistance to Aeromonas hydrophila in Betta sp17, remedy against tilapia (Oreochromis niloticus) parasites and bacterial pathogen18,19. Nevertheless, scientific literature concerning the antibacterial potency of T. catappa against Staphylococcus aureus and Pseudomonas aeruginosa is limited.\n\nThus, the aim of the study was to evaluate the effects of various concentration of T. 
catappa brown leaf extract on the growth of Staphylococcus aureus and Pseudomonas aeruginosa by measuring inhibition zone diameters. The phytochemical content of the extract was also qualitatively determined and the flavonoid and phenolic concentrations in the extract were quantified.\n\nMethods\n\nThe research was performed from March to May 2018; the extraction of T. catappa leaves was carried out at the Animal Physiology, Development and Molecular Laboratory, while the assay study was done at the Microbiology and Molecular Genetics Laboratory.\n\nBacterial strains were obtained from the Microbiology Laboratory, Faculty of Pharmacy, Sumatera Utara University, Indonesia. Staphylococcus aureus ATCC 25923 and Pseudomonas aeruginosa ATCC 27853 were used to investigate the antibacterial activity. Both bacteria were sub-cultured on nutrient agar and stored at 4°C until use.\n\nBrown leaves of T. catappa were collected from the Mulawarman University campus, Samarinda, East Kalimantan. Leaves were dried at room temperature for 2 days, crushed, transferred into a glass container and preserved until the extraction procedure.\n\nApproximately 1 kg of crushed leaves was soaked in 1 l of 95% ethanol for 5 days and shaken occasionally with a shaker. After 5 days, the material was filtered (Whatman No. 11 filter paper). The filtrate was evaporated using a rotary evaporator. Finally, the dried extracts were obtained and stored at 4°C in a dark bottle until use. The dried extracts were then dissolved in sterile distilled water and serial dilutions were prepared to give final concentrations of 30, 60 and 90%.\n\nDried extract samples were subjected to qualitative phytochemical analysis for flavonoids, quinones, alkaloids, phenolics, steroids, triterpenoids, saponins, and tannins using standard methods as previously described by Nugroho et al.20. Meanwhile, total phenolics and flavonoids were quantitatively measured using the method described by Pourmorad et al.21.\n\nThe antibacterial activity of T. 
catappa brown leaf ethanolic extract was evaluated using the disc diffusion method22. Three replicate agar plates were used for each concentration and both controls (distilled water and 0.1% gentamycin sulfate). A total of 10 µl of extract was added to a paper disc for each concentration and control. Each disc was then placed on an agar plate that had been inoculated with the bacterial suspension. All plates were incubated at 37°C for 24 h. The diameter of the inhibition zone created by each disc was measured (in mm) using a micrometer.\n\nThe inhibition zone data were expressed as means ± standard error. The data were subjected to ANOVA, followed by Duncan’s post hoc test to evaluate significant differences among the treatment groups. Meanwhile, the comparison between bacteria at each concentration was performed using a t-test. Significance was set at P<0.05 and all analyses were done using SPSS 22 (SPSS, Inc., USA). The data on phytochemical content and the concentrations of flavonoids and phenolics were analyzed descriptively.\n\n\nResults and discussion\n\nThe dried extract of T. catappa brown leaves contained flavonoids, quinones, phenolics, triterpenoids, and tannins. No alkaloids, steroids or saponins were found in the dried extract. Total phenolic (208.722 mg gallic acid equivalent/g extract) and total flavonoid (35.7671 mg quercetin equivalent/g extract) contents were detected in the dried extract. The inhibition zone diameters of the ethanolic extracts ranged from 1.73 to 9.06 mm for S. aureus, and from 1.83 to 6.5 mm for P. aeruginosa. Increasing the extract concentration increased the inhibition zone diameters for both bacteria (Figure 1). P. aeruginosa was more resistant to the highest concentration of extract (90%) than S. aureus (Table 1). According to Xie et al.23, flavonoids are known antibacterial agents against a wide range of pathogenic bacteria. 
In addition, Fu et al.24 revealed that phenolic extracts from some plants also have antibacterial effects against many kinds of bacteria. The data showing the inhibition zone diameters for both bacteria at each concentration of extract can be seen in Dataset 125.\n\n(a) Negative control, (b) positive control (0.1% gentamycin sulfate), Terminalia catappa extract (c) 30%, (d) 60%, (e) 90%. Images shown are representative of n=3 repeats.\n\nDifferent superscript letters in the same row indicate significantly different mean values for different treatments at P<0.05. Different superscript numbers in the same column indicate significantly different mean values for different treatments at P<0.05. The negative control was omitted as no inhibition zone was present. Positive control, 0.1% gentamycin sulfate.\n\n\nConclusion\n\nEthanolic extracts of the brown leaves of T. catappa have potential antibacterial effects against S. aureus and P. aeruginosa, indicated by the inhibition zone formed around the extract discs. The inhibition zone diameter increased with increasing concentrations of T. catappa extract. P. aeruginosa exhibited more resistance to high concentrations of ethanolic extracts of the brown leaves of T. catappa than S. aureus.\n\n\nData availability\n\nDataset 1. Inhibition zone diameters for both bacteria at different concentrations of extract and images of every repeat experiment performed. DOI: https://doi.org/10.5256/f1000research.15998.d21516925.",
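The analysis pipeline described above (one-way ANOVA across extract concentrations, followed by post hoc comparisons) was run in SPSS; the F statistic at its core can be sketched in pure Python. The inhibition-zone numbers below are hypothetical illustration values, not the study's data, and Duncan's post hoc test is not reproduced here:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of replicate
    measurements: between-group mean square over within-group mean square."""
    k = len(groups)                     # number of treatment groups
    n = sum(len(g) for g in groups)     # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical inhibition-zone diameters (mm), three replicates per
# extract concentration -- illustrative values only, not the article's data.
zones = {30: [1.8, 2.0, 2.2], 60: [4.9, 5.1, 5.3], 90: [8.9, 9.0, 9.2]}
f_stat = one_way_anova_f(list(zones.values()))
```

A large F relative to the F distribution with (k−1, n−k) degrees of freedom indicates that at least one concentration's mean zone differs, which is when a post hoc test such as Duncan's is applied to locate the differing groups.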
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors thank to Faculty of Mathematics and Natural Sciences, Mulawarman University, Samarinda, East Kalimantan. The appreciation goes to all of our students who helped the authors during the trial.\n\n\nReferences\n\nNajiah M, Aqilah NI, Lee KL, et al.: Massive mortality associated with Streptococcus agalactiae infection in cage-cultured red hybrid tilapia Oreochromis niloticus in Como River, Kenyir Lake, Malaysia. J Biol Sci. 2012; 12(8): 438–442. Publisher Full Text\n\nNa-Phatthalung P, Chusri S, Suanyuk N, et al.: In vitro and in vivo assessments of Rhodomyrtus tomentosa leaf extract as an alternative anti-streptococcal agent in Nile tilapia (Oreochromis niloticus L.). J Med Microbiol. 2017; 66(4): 430–439. PubMed Abstract | Publisher Full Text\n\nSaikia DJ, Chattopadhyay P, Banerjee G, et al.: Time and dose dependent effect of Pseudomonas aeruginosa infection on the scales of Channa punctata (Bloch) through light and electron microscopy. Turk J Fish Aquat Sci. 2017; 17(5): 871–876. Publisher Full Text\n\nBaldissera MD, Souza CF, Santos RCV, et al.: Pseudomonas aeruginosa strain PA01 impairs enzymes of the phosphotransfer network in the gills of Rhamdia quelen. Vet Microbiol. 2017; 201: 121–125. PubMed Abstract | Publisher Full Text\n\nThomas J, Thanigaivel S, Vijayakumar S, et al.: Pathogenecity of Pseudomonas aeruginosa in Oreochromis mossambicus and treatment using lime oil nanoemulsion. Colloids Surf B Biointerfaces. 2014; 116: 372–377. PubMed Abstract | Publisher Full Text\n\nGrema HA, Geidam YA, Suleiman A, et al.: Multi-Drug Resistant Bacteria Isolated from Fish and Fish Handlers in Maiduguri, Nigeria. Int J Anim Vet Adv. 2015; 7(3): 49–54. Reference Source\n\nSamanidou VF, Evaggelopoulou EN: Analytical strategies to determine antibiotic residues in fish. J Sep Sci. 2007; 30(16): 2549–2569. 
PubMed Abstract | Publisher Full Text\n\nUchida K, Konishi Y, Harada K, et al.: Monitoring of Antibiotic Residues in Aquatic Products in Urban and Rural Areas of Vietnam. J Agric Food Chem. 2016; 64(31): 6133–8. PubMed Abstract | Publisher Full Text\n\nHe X, Deng M, Wang Q, et al.: Residues and health risk assessment of quinolones and sulfonamides in cultured fish from Pearl River Delta, China. Aquacult. 2016; 458: 38–46. Publisher Full Text\n\nHardi EH, Kusuma IW, Suwinarti W, et al.: Antibacterial activities of some Borneo plant extracts against pathogenic bacteria of Aeromonas hydrophila and Pseudomonas sp. AACL Bioflux. 2016; 9(3): 638–646. Reference Source\n\nHardi EH, Kusuma IW, Suwinarti W, et al.: Short Communication: Antibacterial activity of Boesenbergia pandurata, Zingiber zerumbet and Solanum ferox extracts against Aeromonas hydrophila and Pseudomonas sp. Nusantara Bioscience. 2016; 8(1): 18–21. Publisher Full Text\n\nHardi EH, Saptiani G, Kusuma IW, et al.: Immunomodulatory and antibacterial effects of Boesenbergia pandurata, Solanum ferox, and Zingiber zerumbet on tilapia, Oreochromis niloticus. AACL/Bioflux. 2017; 10(2): 182–190. Reference Source\n\nHyttel P, Sinowatz F, Vejlsted M: Essentials of Domestic Animal Embryology. Saunders/Elsevier. 2010. Reference Source\n\nHyttel P, Sinowatz F, Vejlsted M, et al.: Essentials of domestic animal embryology. Elsevier Health Sciences UK. 2009. Reference Source\n\nMeena K, Raja TK: Immobilization of Saccharomyces cerevisiae cells by gel entrapment using various metal alginates. World J Microbiol Biotechnol. 2006; 22(6): 651–653. Publisher Full Text\n\nVučurović VM, Razmovski RN: Sugar beet pulp as support for Saccharomyces cerivisiae immobilization in bioethanol production. Ind Crops Prod. 2012; 39: 128–134. Publisher Full Text\n\nNugroho RA, Manurung H, Nur FM, et al.: Terminalia catappa L. extract improves survival, hematological profile and resistance to Aeromonas hydrophila in Betta sp. Arch Pol Fisheries. 
2017; 25(2): 103–115. Publisher Full Text\n\nGoh CS, Tan KT, Lee KT, et al.: Bio-ethanol from lignocellulose: Status, perspectives and challenges in Malaysia. Bioresour Technol. 2010; 101(13): 4834–41. PubMed Abstract | Publisher Full Text\n\nÖztop HN, Öztop AY, Işikver Y, et al.: Immobilization of Saccharomyces cerevisiae on to radiation crosslinked HEMA/AAm hydrogels for production of ethyl alcohol. Process Biochem. 2002; 37(6): 651–657. Publisher Full Text\n\nNugroho RA, Manurung H, Saraswati D, et al.: The effects of Terminalia catappa leaf extract on the haematological profile of ornamental fish Betta splendens. Biosaintifika: Journal of Biology and Biology Education. 2016; 8(2): 241–248.\n\nPourmorad F, Hosseinimehr SJ, Shahabimajd N: Antioxidant activity, phenol and flavonoid contents of some selected Iranian medicinal plants. Afr J Biotechnol. 2006; 5(11): 1142–1145. Reference Source\n\nReddy PS, John MS, Devi PV, et al.: Detection of vancomycin susceptibility among clinical isolates of MRSA by using minimum inhibitory concentration method. Int J Res Med Sci. 2015; 3(6): 1378–1382. Publisher Full Text\n\nXie Y, Yang W, Tang F, et al.: Antibacterial activities of flavonoids: structure-activity relationship and mechanism. Curr Med Chem. 2015; 22(1): 132–49. PubMed Abstract | Publisher Full Text\n\nFu L, Lu W, Zhou X: Phenolic Compounds and In Vitro Antibacterial and Antioxidant Activities of Three Tropic Fruits: Persimmon, Guava, and Sweetsop. Biomed Res Int. 2016; 2016: 4287461. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAllyn OQ, Kusumawati E, Nugroho RA: Dataset 1 in: Antimicrobial activity of Terminalia catappa brown leaf extracts against Staphylococcus aureus ATCC 25923 and Pseudomonas aeruginosa ATCC 27853. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15998.d215169"
}
|
[
{
"id": "37963",
"date": "20 Sep 2018",
"name": "Edwin Setiawan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe study is designed appropriately, and the technical procedures are sufficient to answer the research question. Furthermore, the methods and analysis process are described well enough to be replicated by readers. In addition, the statistical method used is appropriate. The authors need to highlight and emphasize the results and discussion, expanding their discussion by comparing their results with those of similar studies.\nIn the results and discussion, it would be better to compare the phenolic and flavonoid content of Terminalia catappa with that of closely related plant extracts, if possible; in other words, to compare with other similar research that used plant bioactive compounds as antibacterials. In this way the results and discussion could be expanded comprehensively.\nIn addition, a short explanation of how flavonoids and phenolics inhibit bacterial growth could be added to this part.",
"responses": []
},
{
"id": "37961",
"date": "23 Oct 2018",
"name": "Md Fakruddin",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nIn the title, the ethanol extract should be mentioned, as the authors only evaluated the ethanol extract of the leaves. The quality of Figure 1 is poor, so it appears that the positive control does not have any inhibition zone! Only one strain of each bacterium was tested; the results would be more convincing if more strains, including some wild or environmental strains, were tested. The MIC/MBC of the extract against these bacteria should be determined. The data from this study should be compared with similar, already published data. It is understood that there may be no data for this particular species, but other species of Terminalia can be consulted.",
"responses": []
},
{
"id": "37962",
"date": "23 Oct 2018",
"name": "Cimi Ilmiawati",
"expertise": [
"Pharmacology",
"toxicology",
"molecular endocrinology"
],
"suggestion": "Approved",
"report": "Approved\n\nComments:\n\nThis study contributes information on the potential of T. catappa leaf extract as an antibacterial alternative to address antibacterial resistance problems in fish farming.\n\nMethods\nSite and time: Please specify the affiliation of the laboratory (whether it belongs to a department in a university or elsewhere).\n\nPlant materials: Please correct the writing of Mulawarman University. How were the crushed leaves preserved, and for how long before extraction?\n\nPhytochemical contents: What is the purpose of the qualitative and quantitative phytochemical analyses in relation to the study objectives? Please elaborate in the introduction.\n\nResults\nThe inhibition zone for the positive control in Figure 1 is not very clear; therefore a zoomed-in figure is required to judge the result. I was expecting to see three discs in each treatment plate, and this experiment should be replicated three times before analyzing the data and arriving at a conclusion. Please comment on your replication method.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1406
|
https://f1000research.com/articles/7-1390/v1
|
03 Sep 18
|
{
"type": "Software Tool Article",
"title": "Providing gene-to-variant and variant-to-gene database identifier mappings to use with BridgeDb mapping services.",
"authors": [
"Friederike Ehrhart",
"Jonathan Melius",
"Elisa Cirillo",
"Martina Kutmon",
"Egon L. Willighagen",
"Susan L. Coort",
"Leopold M.G. Curfs",
"Chris T. Evelo",
"Jonathan Melius",
"Elisa Cirillo",
"Martina Kutmon",
"Egon L. Willighagen",
"Susan L. Coort",
"Leopold M.G. Curfs",
"Chris T. Evelo"
],
"abstract": "Database identifier mapping services are important to make database information interoperable. BridgeDb offers such a service. Available mappings for BridgeDb link 1. gene and gene product identifiers, 2. metabolite identifiers and InChI structure descriptions, and 3. identifiers for biochemical reactions and interactions between multiple resources that use such IDs, while the mappings are obtained from multiple sources. In this study we created BridgeDb mapping databases for selections of genes-to-variants (and variants-to-genes) based on the variants described in Ensembl. Moreover, we demonstrated the use of these mappings in different software tools like R, PathVisio, Cytoscape and a local installation using Docker. The variant mapping databases are now described on the BridgeDb website, are available from the BridgeDb mapping database repository, and are updated according to the regular BridgeDb mapping update schedule.",
"keywords": [
"database identifier mapping",
"gene variant",
"BridgeDb",
"interoperability"
],
"content": "Introduction\n\nMany bioinformatics software tools rely on database identifier mapping, for instance for 1) recognition and mapping of identifiers used in experimental data to the corresponding identifiers present in secondary sources like pathways or ontology classes or 2) simply to combine data from different sources that use different identifiers. BridgeDb is a database identifier mapping tool that is available as a Java framework and as an installable web service (van Iersel et al., 2010). Tools that integrate BridgeDb are, for instance: the community curated pathway resource WikiPathways (Slenter et al., 2018), the modular pathway editor and pathway analysis tool PathVisio (Kutmon et al., 2015), and the network tool Cytoscape used to visualize, extend and evaluate biological networks. Depending on the available mappings, BridgeDb can provide the mapping between identifiers from various data sources, also when these link to different molecular levels, e.g. gene to protein. BridgeDb can also be deployed as a web service. Moreover, it is available in a semantic web version, the Identifier Mapping Service (IMS), which can be used inside the Open PHACTS platform but can also be deployed from a software container (Gray et al., 2014) (link to tutorial and link to GitHub). Mappings for BridgeDb are already available for gene products for many species (produced from the respective Ensembl genome annotations (Aken et al., 2017)), for metabolite identifiers (produced from HMDB (Wishart et al., 2013) and ChEBI (Hastings et al., 2013)), and for reaction identifiers (produced from Rhea (Morgat et al., 2017)).\n\nThe BridgeDb mapping databases are linking pins between tools that support genetic variant, gene, and pathway analysis, helping to visualize a complex biological context such as that typical of multifaceted (genetic) diseases. Gene-to-variant mapping was not yet available for BridgeDb. 
Such mappings can be especially useful to work with genetic variations, for instance when evaluating traits with a complicated genetic background like blood pressure, susceptibility to heart failure, or diabetes type 2 development. Single nucleotide polymorphisms (SNPs) can be responsible for phenotypic variations. In extreme cases this can be the cause of rare genetic disorders. For example, several SNPs in the human DMD gene can cause Duchenne muscular dystrophy (DMD), a severe congenital disorder which leads to severe physical impairment (Magri et al., 2011). Since BridgeDb can stack mappings, the combination of the new gene-to-variant mapping database with the collection that was already available offers versatile mappings for variants to a large set of different human gene and gene product identifiers.\n\nThe main objective of this work was to provide mappings between gene identifiers and variant identifiers in both directions. The steps needed to achieve this were: 1) select the best source for the mappings, 2) collect data from the selected source, 3) annotate the result with provenance data about the process, the source, and the source version, and 4) release the new BridgeDb mapping database and integrate it into the regular BridgeDb mapping database update schedule.\n\nTarget users for the resulting mappings are 1) bioinformaticians and developers, working on new approaches for data integration, if these use human genetic (variant) information; 2) members and users of ELIXIR data interoperability services, including the implementations in the tools mentioned that perform analyses based on human genetic variant data, for instance for the analysis of common multifaceted genetic diseases or in the rare disease field; and 3) researchers who access and query molecular data resulting from the analyses above.\n\n\nMethods\n\nThe gene-to-variant database uses mappings between Ensembl and dbSNP (Kitts et al., 2013). 
The Ensembl gene-to-dbSNP variant mappings present in Ensembl were used as the source. The released database is based on Ensembl r91, dbSNP b150, and the human genome assembly GRCh38. Although Ensembl provides more genetic variation from different sources, we focused on dbSNP as this variation database is regularly updated and adjusted to the current Ensembl genome build. We compared both sources (Ensembl and dbSNP) and made sure that Ensembl provides all available dbSNP variants. Thus, we can rely on the Ensembl API alone as a source for the extraction of the data necessary for creation of this mapping database. To prevent problems introduced by the user interfaces, we used database dumps for this comparison.\n\nThe data dump was obtained from the Ensembl ftp server (link for download). For the first version, we used Ensembl 91, with gene annotation from Gencode 27. The vcf (variant call format) file is the one relevant for our mapping. It contains the dbSNP identifier with its additional attributes and the associated Ensembl transcript identifier. By querying the Ensembl platform web service, we can access the gene identifier of the transcript. Combined, this leads to mappings between variants and genes. The size of the complete mapping database exceeded 150 Gb (for Ensembl 91), so we decided to create several different subsets: exonic variants, missense variants, protein truncating variants (PTV), PTV and missense variants, and variants with a PolyPhen score >0.908 indicating “Probably Damaging”. Other selections can be created easily on individual demand.\n\nThe created database contains the link between the Ensembl gene identifiers and the dbSNP variant identifiers, including a selection of attributes (MAF (minor allele frequency), chromosome, variant alleles, and chromosome position start/end).\n\nFor the rare cases where a variation is associated with more than one gene, the variant is also associated with these genes in the BridgeDb database. 
For example, rs199773918 overlaps the exons of two genes (ENSG00000173366 and ENSG00000239732), and in the exonic variant BridgeDb mapping both genes show up. Nevertheless, in our selection of variants it may happen that not all of them show up, due to different variant effect classifications in the different genes. As an example, rs199773918 is a variant that overlaps the following genes: TPR (ENSG00000047410) and PRG4 (ENSG00000116690). This variant is a “3’ UTR variant” of TPR and a “missense variant” of PRG4. It can be found in both genes’ variant tables, but due to our selection it will show up only once, in the missense variant dataset.\n\nDatabase creation. An open-source Java program to create the gene-to-variant database is available on GitHub. After downloading the vcf file from Ensembl, users create a configuration file with several parameters. Then the database creation program will parse the vcf file, retrieve additional information through the Ensembl web service and create the BridgeDb mapping database. Due to the large number of mappings, the tool commits the mappings to the database in batches to keep the required memory low.\n\nOperation. The database creation workflow is depicted in Figure 1. The vcf file can be downloaded from the Ensembl FTP. The “Homo_sapiens_incl_consequences.vcf.gz” file is used.\n\nThe gene-variant mapping database is built on the variant call format (vcf) file provided by Ensembl. After running the database creation tool, the database can be used in all the different use cases.\n\nSystem requirements. The database creation tool runs with Java and requires more memory than is usually given to a Java process. We advise users to allocate at least 3–4 GB of memory when running the database creation tool (-Xmx4G).\n\n\nResults\n\nThe resulting BridgeDb mapping databases are available as a Derby database from here. 
The new mappings are available for all the BridgeDb implementations mentioned above (PathVisio, Cytoscape, R package, web service, and the IMS). The mapping databases are freely available for download under a CC-BY license. Application examples of the use of the variant BridgeDb database are given in the following section. We created gene-to-variant mapping databases for the variant classes given in Table 1. Any other subset of variant classes can be created on demand using the tool described in the Methods section.\n\n\nUse cases\n\nTo test and demonstrate the application of the variant BridgeDb database, we downloaded the database from BridgeDb. The gene-to-variant (and variant-to-gene) queries are shown in four different tools: the R command line (Team, 2014), PathVisio (Kutmon et al., 2015), Cytoscape (Shannon et al., 2003) and the local IMS installation using Docker, in order to provide an overview of the flexibility of the mapping database in different environments. A genetic variant of the rare disease Duchenne muscular dystrophy (DMD) was selected from the gene-disease association database DisGeNET (Piñero et al., 2017). The rs104894790 (Lenk et al., 1993) SNP was chosen because it presented a high number of citations and a damaging stop-gain effect on the gene’s protein product.\n\nThe SNP rs104894790, as described above, was used to query the Ensembl identifier for the gene(s) in which it is located (variant-to-gene query). The query was performed on the R command line, after the installation of the BridgeDb R package (link to BridgeDb R package) (example R script in Supplementary File 1) (R version 3.5.1). The result shows that the variant is located in only one gene: dystrophin (DMD, ENSG00000198947). DMD is one of the largest genes in the human genome (about 2.2 Mb); it is composed of 79 exons and has 32 known transcripts, of which 20 are protein-coding. 
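The variant-to-gene lookup just demonstrated, and its gene-to-variant inverse, can be sketched with a plain dictionary standing in for the BridgeDb mapping database. This is an illustration only: the helper names map_variant and map_gene are hypothetical and do not reflect the real BridgeDb or BridgeDbR API, and the table holds just the article's example identifiers.

```python
# Illustrative stand-in for a BridgeDb gene-to-variant mapping database.
# Identifiers taken from the article's DMD example; helpers are hypothetical.

variant_to_gene = {
    "rs104894790": ["ENSG00000198947"],  # SNP located in the DMD gene
}

# Derive the reverse (gene-to-variant) table from the forward one.
gene_to_variant = {}
for snp, genes in variant_to_gene.items():
    for gene in genes:
        gene_to_variant.setdefault(gene, []).append(snp)

def map_variant(snp_id):
    """Return the Ensembl gene ID(s) containing the given dbSNP ID."""
    return variant_to_gene.get(snp_id, [])

def map_gene(gene_id):
    """Return the dbSNP ID(s) located in the given Ensembl gene ID."""
    return gene_to_variant.get(gene_id, [])

print(map_variant("rs104894790"))  # ['ENSG00000198947']
print(map_gene("ENSG00000198947"))  # ['rs104894790']
```

Deriving the reverse table from the forward one mirrors how a single mapping database can serve queries in both directions.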
Because the output consists of identifiers, it can easily be linked to other R packages such as mygene (Mark et al., 2014), which normally wraps around the mygene.info web service (Xin et al., 2016).\n\nWe used PathVisio (version 3.3.0) (Figure 2), a biological pathway analysis tool that allows drawing, editing and analyzing biological pathways, to demonstrate how the new gene-variant database can be used to evaluate variants in a pathway context. PathVisio, like Cytoscape, has the BridgeDb functionality integrated in the core. For the purpose of the demonstration, we first selected pathways that contain the DMD gene from the R example. Five pathways were found: two striated muscle contraction pathways (WP3795 and WP383), Ectoderm differentiation (WP2858), Extracellular matrix organization (WP2703) and Arrhythmogenic right ventricular cardiomyopathy (WP2118). In principle, a new PathVisio plugin could now be developed that automatically searches pathways that contain genes with selected variants, or the plugin could show all variants from an analysis set on a given pathway. For the example, one of the striated muscle contraction pathways (WP383) was selected and visualized. Next, the BridgeDb variant database was loaded, using the BridgeDbConfig plugin. After selecting a gene in the pathway, the backpage tab of the right-hand side panel now shows the list of hyperlinks obtained from the BridgeDb database that point to different information sources linked to the selected gene. Figure 2 shows the backpage with the list of the 720 SNPs (from the BridgeDb database with a PolyPhen score > 0.908, file SNP_r91_PolyPhen.bridge) for the selected DMD gene. All the SNPs in the backpage have a hyperlink to the corresponding dbSNP page.\n\nWhen the DMD gene is selected, a list of hyperlinks from different sources is displayed in the back page of the left panel. 
In this case, a list of SNPs located in the gene is visualized.\n\nAn alternative gene-to-variant visualization is provided using Cytoscape (version 3.6.1), a popular tool for (biological) network analysis and visualization (Figure 3). The BridgeDb app for Cytoscape is available here. A node with the Ensembl gene identifier of DMD was created and the 720 SNPs were mapped to the gene using the BridgeDb app interface. A gene-variant network was created using the list of mapped variants. Moreover, the app can be used to configure the selection of several attribute columns related to the variant nodes, such as chromosome location, minor allele frequency, and variant allele. In this example figure, we visualize the PolyPhen score as the node fill color of the variants. For simplicity, the rs-numbers are not displayed.\n\nUsing the BridgeDb app for Cytoscape, a gene-variant network for the DMD gene (blue rounded rectangle) and its 720 probably damaging variants (PolyPhen score > 0.908) was created. The node color of the variants represents the PolyPhen score as a gradient (white to red); the darker the red, the higher the PolyPhen score.\n\nFinally, to show that identifier mapping linking variants to genes and vice versa can also be done at a semantic web level, we demonstrate how an online BridgeDb Identifier Mapping Service (IMS) can be set up. The IMS technology was developed in the Open PHACTS project to link drug discovery related data sets, including a Docker image (Batchelor et al., 2014; van Iersel et al., 2010; Williams et al., 2012). Here, identifier mappings are defined by link sets, which specify which identifiers are mapped. However, unlike traditional BridgeDb mapping files, these link sets also specify why the two identifiers are mapped, allowing them to be used as scientific lenses (Batchelor et al., 2014).\n\nBecause the IMS works at a semantic web level, identifiers are represented by uniform resource identifiers (URIs). 
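Such URIs typically follow identifiers.org-style patterns. As a minimal sketch (the patterns and the to_uri helper below are illustrative assumptions, not necessarily the exact patterns configured in a given IMS instance):

```python
# Minimal sketch of rendering database identifiers as URIs for a semantic
# web mapping service. Patterns shown are typical identifiers.org forms and
# are assumptions for illustration; the helper name is hypothetical.

URI_PATTERNS = {
    "ensembl": "http://identifiers.org/ensembl/{id}",
    "dbsnp": "http://identifiers.org/dbsnp/{id}",
}

def to_uri(source, identifier):
    """Render a database identifier as a URI using the pattern for its source."""
    return URI_PATTERNS[source].format(id=identifier)

print(to_uri("ensembl", "ENSG00000198947"))
# http://identifiers.org/ensembl/ENSG00000198947
print(to_uri("dbsnp", "rs104894790"))
# http://identifiers.org/dbsnp/rs104894790
```

A service that knows several patterns for the same data source can treat the resulting URIs as equivalent, which is the role the MIRIAM registry plays for the IMS.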
Moreover, the IMS is aware of URI equivalence defined by the MIRIAM registry (Juty et al., 2012). This means that even when a mapping file does not provide mappings for a certain URI, one would still get a number of equivalent URIs, following knowledge from the MIRIAM database. And when a single mapping is found in the link sets, equivalent URIs for the mapped URIs are returned. The IMS provides a targetUriPattern parameter that allows users to restrict the number of mapped URIs.\n\nWe developed a tutorial explaining how to set up an IMS instance with the variant-gene mappings (available from GitHub). The instance is run locally using a Docker container developed by Open PHACTS, which is available from DockerHub. After the Docker image is started, it provides a web interface and an API. The web interface has a \"Check Mapping for an URI\" page where the URI to be mapped can be given, along with the return format (XML, JSON, or HTML) and optionally a lensUri (see (Batchelor et al., 2014) and the aforementioned targetUriPattern).\n\nHowever, it is more convenient to use this API from other tools, as demonstrated with a second R script (Supplementary File 1). This R script uses the curl (Ooms, 2017 link) and jsonlite (Ooms, 2014) packages to interact with the IMS. The first package is used to call the IMS web service and the second to convert the returned JSON into a data model more easily handled in R. The example consists of two API calls: the first part finds 603 variants for the DMD gene (Ensembl ID ENSG00000198947); the second example takes a single variant (dbSNP ID rs769658853) and looks up the matching gene.\n\n\nDiscussion\n\nThe BridgeDb toolset provides several apps and tools designed for different purposes, while mapping databases are available to link different database IDs for genes and gene products, metabolites, and reactions and interactions. 
A mapping database in the BridgeDb software environment, capable of linking genes to their variants and vice versa, was not yet available. The new database is expected to be useful for enhancing the biological interpretation of genetic variant data (as shown with the example of the DMD gene), for instance when using apps that evaluate biological pathways, use the classification of genes according to ontology terms, or in the R environment when performing gene- and variant-related statistical evaluation.\n\nWith this newly created mapping database and the transitivity function of BridgeDb, the user can map between three different layers: e.g. variant-gene-protein. This approach can support multi-omics analysis for various biomedical applications, and tools like Cytoscape and PathVisio can be used immediately to benefit from this.\n\nWe intend to keep the content up-to-date through regular updates. The human variant mapping database is already incorporated into the quarterly BridgeDb mapping database update. Other variant sets, beyond the currently included protein truncating and missense variants, can also be created on user community (or individual) demand.\n\n\nData availability\n\nThe new gene-to-variant mapping databases are available here: http://bridgedb.org/data/gene_database/\n\nAvailable under an Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0.html)\n\n\nSoftware availability\n\nSource code for building the mapping databases is available from GitHub: https://github.com/BiGCAT-UM/BridgeDbVariantDatabase\n\nArchived source code at time of publication: http://doi.org/10.5281/zenodo.1326514 (Willighagen & Melius, 2018)\n\nLicense: Apache 2.0",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by ELIXIR, the European research infrastructure for life-science data.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank the BridgeDb development team. This work heavily leaned on previous work done by the dbNP and Ensembl teams who curated the actual mappings and on the original BridgeDb development team, especially Martijn van Iersel.\n\n\nSupplementary material\n\nSupplementary File 1 – R code and instructions for setting up the BridgeDb IMS docker.\n\nClick here to access the data.\n\n\nReferences\n\nAken BL, Achuthan P, Akanni W, et al.: Ensembl 2017. Nucleic Acids Res. 2017; 45(D1): D635–D642. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBatchelor C, Brenninkmeijer CYA, Chichester C, et al.: Scientific Lenses to Support Multiple Views over Linked Chemistry Data. International Semantic Web Conference. 2014; 2014: 98–113. Publisher Full Text\n\nGray AJ, Groth P, Loizou A, et al.: Applying linked data approaches to pharmacology: Architectural decisions and implementation. Semant Web. 2014; 5(2): 101–113. Publisher Full Text\n\nHastings J, de Matos P, Dekker A, et al.: The ChEBI reference database and ontology for biologically relevant chemistry: enhancements for 2013. Nucleic Acids Res. 2013; 41(Database issue): D456–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJuty N, Le Novère N, Laibe C: Identifiers.org and MIRIAM Registry: community resources to provide persistent identification. Nucleic Acids Res. 2012; 40(Database issue): D580–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKitts A, Phan L, Ward M, et al.: The NCBI Handbook - The Database of Short Genetic Variation (dbSNP). 2 ed. Bethesda (MD): National Center for Biotechnology Information (US). 2013. 
Reference Source\n\nKutmon M, van Iersel MP, Bohler A, et al.: PathVisio 3: an extendable pathway analysis toolbox. PLoS Comput Biol. 2015; 11(2): e1004085. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLenk U, Hanke R, Kraft U, et al.: Non-isotopic analysis of single strand conformation polymorphism (SSCP) in the exon 13 region of the human dystrophin gene. J Med Genet. 1993; 30(11): 951–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMagri F, Govoni A, D'Angelo MG, et al.: Genotype and phenotype characterization in a large dystrophinopathic cohort with extended follow-up. J Neurol. 2011; 258(9): 1610–23. PubMed Abstract | Publisher Full Text\n\nMark A, Thompson R, Afrasiabi C, et al.: Access MyGene.Info_ services. Bioconductor. 2014. Publisher Full Text\n\nMorgat A, Lombardot T, Axelsen KB, et al.: Updates in Rhea - an expert curated resource of biochemical reactions. Nucleic Acids Res. 2017; 45(D1): D415–D418. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOoms J: The jsonlite Package: A Practical and Consistent Mapping Between JSON Data and R Objects. arXiv. 2014. Reference Source\n\nPiñero J, Bravo À, Queralt-Rosinach N, et al.: DisGeNET: a comprehensive platform integrating information on human disease-associated genes and variants. Nucleic Acids Res. 2017; 45(D1): D833–D839. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShannon P, Markiel A, Ozier O, et al.: Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003; 13(11): 2498–504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSlenter DN, Kutmon M, Hanspers K, et al.: WikiPathways: a multifaceted pathway database bridging metabolomics to other omics research. Nucleic Acids Res. 2018; 46(D1): D661–D667. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeam RC: R: A language and environment for statistical computing. R Foundation for Statistical Computing. 
2014.\n\nvan Iersel MP, Pico AR, Kelder T, et al.: The BridgeDb framework: standardized access to gene, protein and metabolite identifier mapping services. BMC Bioinformatics. 2010; 11: 5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilliams AJ, Harland L, Groth P, et al.: Open PHACTS: semantic interoperability for drug discovery. Drug Discov Today. 2012; 17(21–22): 1188–98. PubMed Abstract | Publisher Full Text\n\nWillighagen E, Melius J: BiGCAT-UM/BridgeDbVariantDatabase: Gene-Variant database builder. Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1326514\n\nWishart DS, Jewison T, Guo AC, et al.: HMDB 3.0--The Human Metabolome Database in 2013. Nucleic Acids Res. 2013; 41(Database issue): D801–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXin J, Mark A, Afrasiabi C, et al.: High-performance web services for querying gene and variant annotation. Genome Biol. 2016; 17(1): 91. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "39323",
"date": "22 Oct 2018",
"name": "Patrice Godard",
"expertise": [
"bioinformatics",
"genomics",
"genetics",
"R"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThis article describes the use of the BridgeDb framework and associated tools to map gene and gene variant identifiers. The authors produced and made available five ready-to-use mapping databases focused on different categories of human gene variants extracted from the Ensembl database (version 91). They also document in the article the use of these databases in four different environments. As stated by the authors, BridgeDb is integrated or used by different tools and resources such as WikiPathways and PathVisio. In this context, it makes sense to enrich the BridgeDb ecosystem with additional mapping databases such as those produced by the authors, focused on gene/gene variant associations, and it makes sense to publish an article describing the newly available features.\nHowever, there are fundamental flaws in the article that seriously undermine the findings and conclusions:\nThe different databases which are provided are not described sufficiently. The point here is about informing users of the level of exhaustivity (relative to the original data source) they can expect when performing a mapping between genes and gene variants.\nWhat is the number of SNPs and genes covered by each of them? What were the formal criteria used to include a SNP in one or the other database? 
(They can be found in the config file, but they are not described and the list of other possible options is not given.) The authors do not report if and how they checked the content of the databases they produced against the original data source (Ensembl 91). What is the overlap between the different bridge databases? The selection of the database for mapping gene identifiers to gene variants can be done according to the variant criteria under focus. But how does one select the relevant database to achieve the opposite task: finding the gene associated with a gene variant? This concern is exemplified by the IMS use case provided by the authors: the number of variants found associated with the ENSG00000198947 gene using the IMS method (603) is smaller than the number of variants found with the methods relying on the PolyPhen bridge database (720). A greater, or at least equal, number of SNPs was expected since no specific database is specified in the IMS query. Does it rely on the same information as is available in the bridge databases?\n\nThe authors claim that the bridge databases they provide contain a selection of attributes that can be retrieved. I could only test this feature using the Cytoscape BridgeDb app, trying to reproduce Figure 3. I could install the SNP_r91_PolyPhen.bridge database and use it to retrieve 720 variants associated with the ENSG00000198947 gene. Then, I tried to get all the attributes for the 720 SNPs.\nThe query took more than 3 hours to run (Processor: Intel i5-6300U 2.40GHz; RAM: 8GB; OS: Windows 10 Enterprise 64-bit; Cytoscape 3.6.1; BridgeDb app 1.1.0.2). Such a long runtime should be mentioned in the article. The PolyPhen score was not available among the listed attributes, which prevented me from reproducing Figure 3. Also, the PolyPhen scores are reported by transcript in Ensembl, and the authors do not document how the score is recorded by gene in the bridge database: do they take the average, the maximum, or the minimum score? 
Moreover, I got only empty arrays for the MAF attribute (no value). Only one allele was returned as the Variant Alleles attribute for each SNP.\n\nIn addition to these flaws, other major issues need to be addressed:\nIntroduction\nThe authors do not cite services/tools already available for finding SNP/gene cross-references. Among possible candidates: Ensembl BioMart (https://www.ensembl.org/biomart) and MyVariant (https://myvariant.info/). The authors should explain in the introduction why they developed a new resource and describe in the discussion the advantages or differentiating features of their solution. The authors list 3 categories of users, but they do not describe their needs and how those needs would be fulfilled by their solution (this could be part of the discussion).\n\nMethod\nIn the introduction, the authors mention the selection of the best source for mappings as the first step to build the bridge database. However, they selected the dbSNP information provided by Ensembl without explaining why they made this choice. What were the criteria used to define this resource as the best one? Why not use the files provided by dbSNP directly? As mentioned above, the authors claim the following attributes are available for the SNPs: MAF, chromosome and chromosome position, and variant alleles. Besides the flaws identified above, these attributes should be described more precisely: In which population was the MAF measured? What are the ancestral and minor alleles of each SNP? What is the genome version used for the chromosome positions?\n\nUse cases\nI could download and use the databases to reproduce the code provided by the authors. However, the SNP provided as an example in the script is not the one described in the article (rs5927022 in the script vs rs104894790 in the article). 
This is problematic since the SNP provided in the script could not be found in any of the 5 bridge databases (it’s actually an intronic SNP: http://www.ensembl.org/Homo_sapiens/Variation/Explore?db=core;r=3:52224867-52225867;source=dbSNP;v=rs5927022;vdb=variation;vf=15937754). Also, it was not easy to find in which database the variant described in the article was available. I tried all of them and could find this variant in the “SNP_r91_Exon.bridge” and “SNP_r91_PTV.bridge” databases. The script should be in accordance with the article. The authors should also provide a strategy to identify the relevant database to map a SNP or several SNPs to a gene (as mentioned above). Is it possible to get SNP attributes from the R interface as it is from Cytoscape?\n\nFinally, minor issues could also be addressed to improve the quality of the article.\nMethod\nThe authors write that they are able to rely on the Ensembl API. But they used files downloaded from the Ensembl site, not the API. This sentence should be modified accordingly. The authors mention problems introduced by Ensembl user interfaces. What are these problems?\n\nImplementation\nHow long does it take to create each bridge database? Why not create a complete database with more attributes for variant annotation? The VCF file mentioned in the article is not available anymore; it has been split by chromosome since Ensembl release 93. Is the database creation workflow compatible with this new organization of the original files? Figure 1 is not very informative. It does not describe the database creation workflow, which is only a box in this figure. It would be more informative to focus on this box and explain the different steps within it. Indeed, according to the information found on GitHub, it seems that there are 2 Java programs, “VariantReader” and “VariantCreator”, which are called sequentially to produce the database.\n\nResults\nThe dates in Table 1 are misleading. 
They probably refer to the dates of database creation in June and July 2018. However, the Ensembl version used as the data source is from December 2017/April 2018. The authors should clarify this point in the table legend.\n\nUse cases\nIt would be very useful to add the attributes of the SNPs to the PathVisio backpage in addition to the hyperlinks. Being able to access SNP information from Cytoscape is a nice feature. However, I don’t think that the use case provided by the authors is very relevant. Indeed, I don’t know how a network with 1 gene linked to 720 variants can be used or interpreted as such (in this case, a table with all the variants related to the gene and their attributes should be sufficient). Maybe an example of a network with more genes would be more interesting. The link to the Cytoscape app is missing. The first 3 paragraphs of the “BridgeDb identifier Mapping Service (IMS)” section should go in Methods.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": []
},
{
"id": "39322",
"date": "01 Nov 2018",
"name": "Osman Ugur Sezerman",
"expertise": [
"Reviewer Expertise bioinformatics"
],
"suggestion": "Approved",
"report": "Approved\n\nDatabase identifier mapping services are necessary to make information interoperable and to enable linking to other resources. In the present work, BridgeDb added a new feature enabling mapping databases for genes to variants and vice versa for the variants described in Ensembl.\nImplementation stages are explained in detail, making the work reproducible. The use case scenario clearly demonstrates the value added by the service. The work is certainly scientifically sound.\nI have two points to be addressed:\n\nHow do they handle the variants in case splice variants of the same gene are present? It is difficult to query and search the available gene-to-variant mapping databases.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1390
|
https://f1000research.com/articles/7-1389/v1
|
03 Sep 18
|
{
"type": "Research Article",
"title": "Inadequate survivorship care after allogeneic hematopoietic cell transplantation: A retrospective chart review",
"authors": [
"Sunn Sunn Thaw",
"Shernan Holtan",
"Qing Cao",
"Michael Franklin",
"Nyan Paye",
"Anne Blaes"
],
"abstract": "Background: Hematopoietic cell transplant (HCT) survivors are at risk of developing long-term complications. Guidelines for survivorship care of HCT recipients were published in 2012; however, the degree to which these guidelines are incorporated into clinical practice is unknown. The purpose of this study was to determine whether providers utilize the 2012 guidelines and to analyze whether survivorship-focused providers, provider gender, or provider year of practice influenced adherence to these guidelines. Methods: Adult allogeneic HCT recipients’ medical records were reviewed at the University of Minnesota between 2010 and 2012; only patients who survived without relapse to their 2-year follow-up visit after HCT were included. A semi-quantitative scoring system was developed providing 1 point for each of the 13 organ systems assessed by the 2012 survivorship care guidelines. Data was collected on history, clinical exam, laboratory tests, preventive measures, and counseling. The primary endpoint was the overall score for adherence to the survivorship care guidelines. Wilcoxon rank-sum tests for continuous and Chi-square tests for categorical factors were used to compare the overall score between provider groups (survivorship-focused providers vs others), provider gender, and provider year of practice (≥10 years vs <10 years). Results: Fifteen providers (9 male, 3 survivorship-focused, 7 with <10 years of practice) provided follow-up care to 77 HCT survivors. Survivorship-focused providers had a higher median overall score than other providers (median 10 vs 8, p<0.01). Female providers had a higher median overall score than male providers (median 9.0 vs 8, p<0.01). There was no difference in median overall score based on provider year of experience (p=0.43). Conclusions: Survivorship-focused providers were more likely to achieve long-term screening recommendations. 
However, even within this group, adherence to the 2012 screening and preventive practice guidelines was incomplete. Further efforts to automate and standardize the survivorship assessments in HCT survivors are necessary.",
"keywords": [
"survivorship",
"hematopoietic stem cell transplantation",
"bone marrow transplant",
"cancer survivorship"
],
"content": "Introduction\n\nPatients who undergo allogeneic hematopoietic cell transplantation (HCT) are typically exposed to chemotherapy and radiation therapy as part of their cancer treatment and transplant conditioning regimens. Two-thirds of HCT survivors have at least one chronic medical illness, and one-fifth of survivors have at least one severe health condition2. Their life expectancy is estimated to be 30% lower than the general population because of excess risk of death from second malignancy, cancer relapse, infection, chronic graft-versus-host disease (GVHD), and pulmonary and cardiovascular diseases3. Follow-up care after HCT is essential to screen for long-term complications and institute preventive therapies. To standardize management of complications after HCT, a collaborative group led by the Center for International Blood and Marrow Transplant Research (CIBMTR) published guidelines for survivorship screening and preventive practices for long-term survivors after HCT4. The guidelines provide specific monitoring recommendations categorized into 13 organ systems commonly affected in transplant survivors. Although the guidelines were published in 2012, adherence to these guidelines among transplantation oncologists and other clinical providers has not been evaluated.\n\nTo determine whether providers utilize HCT survivorship care guidelines, we conducted a retrospective, single-institution case series of provider adherence to the guidelines at the 2-year follow-up visit after HCT. In addition, we analyzed whether survivorship-focused care, provider gender, or provider years of practice influenced adherence to the recommended guidelines for managing HCT survivors.\n\n\nMethods\n\nA retrospective chart review was performed to identify adult survivors after allogeneic HCT who attended their 2-year post-transplant visit at the University of Minnesota Medical Center between 2012 and 2014. 
Inclusion criteria included age ≥18 years and receipt of allogeneic HCT from a matched related, matched unrelated, or umbilical cord blood donor between 2010–2012. Exclusion criteria included disease relapse before the 2-year visit or lack of a follow-up visit at our center. Data was collected on history, clinical exam, laboratory tests, preventive measures, counseling, and provider. This study was approved by the Institutional Review Board of University of Minnesota Medical Center (1412M58923).\n\nFor semi-quantitative assessment of adherence to the HCT survivorship care guidelines, a scoring system was developed that provided 1 point for each quality measurement implemented (Table 1). Scores were determined for 13 individual organ systems: immunization status, ocular system, oral system, respiratory system, cardiovascular system, hepatic system, renal system, musculoskeletal system, central nervous system, endocrine system, mucocutaneous system, second cancer screening, and psychological system. Individual system scores were added to determine an overall score for follow-up care. There were 41 questions under the 13 categories. Each individual organ system score was determined by adding one point for each question addressed. The questionnaire used is available as Supplementary File 1. The overall score assessment was determined by adding one point for each organ system if at least one question in the category for that organ system was addressed. Points were only assigned if there was documentation in the medical chart that a question was addressed; missing data were analyzed as not being addressed.\n\nThe primary endpoint was the overall score for adherence to the HCT survivorship care guidelines. Secondary endpoints involved analyzing the association between guideline adherence and potential modifiers of follow-up care: provider gender, provider year of experience, and self-reported specialization in survivorship care. 
Providers were transplantation oncologists who cared for the patients during their 2-year follow-up visit. The time allotted to providers for the 2-year visit was 30 minutes. A survivorship-focused provider was a self-identified individual conducting education and research on survivorship.\n\nThe Wilcoxon rank sum test was used to compare the overall score between provider groups (survivorship-focused providers vs others), provider gender (male vs female), and provider year of practice (≥10 years vs <10 years). For the 13 individual system scores, statistical comparisons between factors were completed using the Wilcoxon rank-sum test for continuous and the Chi-square test for categorical factors. All analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA).\n\n\nResults\n\nOf 111 adult patients surviving to 2 years after HCT, 34 patients were excluded due to disease relapse before the 2-year visit or lack of a follow-up visit at our center. 77 patients who completed a 2-year visit at the University of Minnesota Medical Center were included (Table 2). 15 providers delivered care for these patients. Of these, 6 (40%) were female providers, 3 (20%) were survivorship-focused providers, and 7 (47%) were providers with <10 years of experience.\n\nThe median overall score for adherence to survivorship care guidelines was 8.0 (6.0–13.0; Table 3). Notably, providers did not perform assessments in a substantial percentage of patients for several organ systems, including immune (36.4%), ocular (59.7%), musculoskeletal (62.3%), central nervous (39.0%), endocrine (27.3%), mucocutaneous (27.3%), and psychological (92.2%). In addition, a large majority of providers did not perform second cancer counseling and screening (84.4%). Responses to all 41 questions on the questionnaire are shown in Supplementary Tables S1 through S3. 
All patients received a perfect score on the renal system assessment, which included blood pressure screening and routine laboratory tests for blood urea nitrogen and creatinine. Laboratory testing for hepatic function was also completed in the entire cohort of patients.\n\nSurvivorship-focused providers and other providers had significant differences in completeness of screening assessment. Survivorship-focused providers had a higher median overall score than other providers (median 10 vs 8; p<0.01; Table 3). Survivorship-focused providers were more complete in assessment of the ocular system, oral system, cardiovascular system, musculoskeletal system, central nervous system, and mucocutaneous system (Table 3 and Supplementary Table S1). In addition, survivorship-focused providers were more likely to perform counseling on second cancer awareness and screening than other providers. However, most survivors had no documented assessment of their psychological and behavioral health by either survivorship-focused providers or any other providers (85.7% vs 91.2%, p = 0.52).\n\nFemale providers had a higher median overall score than male providers (median 9.0 vs 8, p<0.01; Table 3). Female providers covered more organ systems than male providers, with higher median scores in the ocular system, cardiovascular system, endocrine system, and second cancer screening and counseling (Table 3 and Supplementary Table S2).\n\nThere was no statistically significant difference in median overall score based on provider year of experience (Table 3; p=0.43); however, providers with <10 years of experience performed better in assessments of the immune system, while providers with ≥10 years of experience performed better in assessments of the endocrine system (Table 3 and Supplementary Table S3).\n\n\nDiscussion\n\nHCT survivors are at risk of developing long-term complications. 
Guidelines for survivorship care of HCT recipients were published in 2012; however, the degree to which these guidelines are incorporated into clinical practice is unknown. In the present study, we demonstrate that adherence to the 2012 guidelines is sub-optimal and identify areas needing more diligent assessment in the long-term care of HCT recipients. While all providers followed certain guidelines of survivorship care, such as liver function screening, renal screening, and blood pressure screening, there were several areas identified where extra training in survivorship care would enhance the implementation of screening measures. Our study showed that survivorship-focused providers performed more extensive screening assessments and counseling in 7 organ systems (ocular, oral, cardiovascular, musculoskeletal, central nervous system, mucocutaneous, and second cancer screening) as compared to other providers. Given the discrepancy in these areas, further oncology education in HCT survivorship care for all providers is needed to improve the health of HCT survivors.\n\nCaring for HCT survivors requires understanding their long-term morbidity and elevated mortality risk. The challenge for clinicians in caring for HCT survivors is tremendous and a major responsibility of the transplant team. In our case series, there were several important areas where clinicians could improve their assessment of HCT survivors, including immunization assessment and completing immunizations on schedule, assessment of endocrine dysfunction, cardiovascular disease, cognitive function, sexual function, psychosocial status and quality of life, inquiry into family functioning, substance abuse, counseling on physical activity, fall prevention, supplementation with calcium/vitamin D, and awareness of second malignancies. All of these areas were noted to be significantly deficient in many patients in our series. 
Having a provider with a background and interest in survivorship care improved the number of long-term health screenings an HCT recipient received; however, there is still room for improvement, as less than a quarter of HCT survivors received a truly comprehensive clinical assessment during their 2-year visit.\n\nOver a decade ago, the Institute of Medicine proposed that all cancer survivors have a survivorship care plan (SCP), which should include a treatment summary and follow-up care plan5,6. The American Society of Clinical Oncology and other professional organizations put tremendous effort into developing various survivorship care models, such as the SCP, treatment summaries, and implementation strategies to attain improved quality care for all cancer survivors; however, the implementation of these SCPs has not been satisfactorily accomplished in oncology practices7–10. There are several practical issues: lack of time and staff, lack of coordination and communication among providers, and, potentially, reimbursement. In addition, evaluation of the benefits in terms of improved outcomes of survivorship care is ongoing11,12.\n\nThe post-transplant visit is allocated 30 minutes for each patient, with lab studies completed prior to the provider visit. Addressing complex medical issues and potential complications within 30 minutes is challenging for providers. A high-quality post-transplant visit can only be completed with expert time management and thorough preparation and coordination of care among providers. Using electronic methods or automation to cover the thirteen identified areas during the 2-year visit would help ensure that all HCT patients receive the recommended care.\n\nOne proposed strategy to overcome the above-mentioned barriers is a risk-stratified shared-care model with delegation of roles and responsibilities between the oncologist and primary care provider. 
For example, subspecialists such as a dentist, an ophthalmologist, and a psychologist could be closely involved in coordination of long-term follow-up care13. It has been shown that the frequency of health promotion and health behavior discussions by clinicians is suboptimal: approximately 25% of cancer survivors are not engaged in any such discussions14. Cancer survivors are more likely to receive appropriate interventions for their comorbid conditions if they routinely follow up with their primary care providers15. Educating HCT survivors, together with ongoing care by their primary care providers, will be an important element of post-transplant care. In general, coordination of the shared care of a cancer survivor between oncology and primary care appears insufficient because of gaps in primary care providers’ knowledge, attitudes, and comfort in caring for cancer survivors16,17. With effective and efficient communication, this strategy is likely feasible to achieve.\n\nSeveral transplant centers have adopted the concept of a dedicated clinic of providers that focus on the long-term effects of cancer and previous treatment and deliver education on healthy lifestyle behaviors to reduce complications and lower the risk of additional cancers, while the non-transplant oncologist focuses on cancer surveillance. Our study suggests the potential benefit of having dedicated providers for HCT survivors, as the providers’ interest, enthusiasm, and additional training would likely lead to a comprehensive review of all important clinical assessments in a well-organized fashion.\n\nSeveral professional oncology organizations have proposed SCP templates and web applications with data storage (e.g., the American Society of Clinical Oncology, Journey Forward, and OncoLink) to assist in developing a survivor’s own care plan and delivering health education. It is critical to allow the survivors to be actively involved in their SCP. 
One other potential approach would be to have patients complete their clinical assessment questionnaire prior to their visit; this questionnaire would allow the provider to focus on clinical assessment and counseling rather than data collection. In addition, the electronic medical record could be customized to ensure a comprehensive assessment by providing clinicians with an SCP to-do list.\n\nOur analysis also supports the notion that some categories in assessment (e.g., laboratory testing for liver function, renal function, and blood pressure measurement) were completed 100% of the time simply as a result of routine practice. Further work is needed to examine how the electronic medical record can more efficiently automate these measures.\n\nOur analysis may be limited in that it is a retrospective chart review at a single transplant center a few years after the guidelines were published. It captures one time point in the management of HCT survivors, who are likely followed by their primary care clinic in addition to our transplant clinic; important clinical issues may have been addressed during other visits or by their primary care providers. Given there is currently no standard for being an expert in survivorship care, providers also self-reported their expertise in survivorship care. Another factor that has to be taken into account in this chart review is the comprehensiveness of the documentation of topics addressed at these visits. For example, mental health screening may have occurred; however, if not documented, the screening was not captured. Nonetheless, this study is a first step toward recognizing how effectively HCT survivorship care is being provided. 
Despite these limitations, this study demonstrates that providers need more education in long-term complications of cancer treatment, as well as strategies and tools for implementing these screening recommendations in a timely, efficient manner given the other demands of their clinical practice.\n\nIn conclusion, post-transplantation care by survivorship-focused providers was more likely to achieve the long-term screening recommendations set forth by experts from international transplant professional societies; however, there is room for improvement in adherence to the guidelines even within this group of providers. Our study highlights the critical need for survivorship education for providers as well as better tools to automate this process. Given the complexity of caring for HCT survivors, new comprehensive and efficient tools to improve adherence to these guidelines are needed to provide optimal survivorship care.\n\n\nData availability\n\nDataset 1: Underlying study data with data dictionary 10.5256/f1000research.15633.d21588518",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: Study questionnaire.\n\nSupplementary Table 1: Overall and individual system scores by survivorship-focused vs. other providers.\n\nSupplementary Table 2: Overall and individual system scores by provider’s gender.\n\nSupplementary Table 3: Overall and individual system scores by provider’s experience.\n\n\nReferences\n\nPasquini MC, Zhu X: Current uses and outcomes of hematopoietic stem cell transplantation: CIBMTR Summary Slides. 2015.\n\nSun CL, Francisco L, Kawashima T, et al.: Prevalence and predictors of chronic health conditions after hematopoietic cell transplantation: a report from the Bone Marrow Transplant Survivor Study. Blood. 2010; 116(17): 3129–3139; quiz 3377.\n\nMartin PJ, Counts GW Jr, Applebaum FR, et al.: Life expectancy in patients surviving more than 5 years after hematopoietic cell transplantation. J Clin Oncol. 2010; 28(6): 1011–1016.\n\nMajhail NS, Rizzo JD, Lee SJ, et al.: Recommended screening and preventive practices for long-term survivors after hematopoietic cell transplantation. Biol Blood Marrow Transplant. 2012; 18(3): 348–371.\n\nHewitt M, Greenfield S, Stovall E, et al.: From Cancer Patient to Cancer Survivor: Lost in Transition. Washington, DC: National Academies Press, 2005.\n\nHewitt M, Ganz P: Implementing Cancer Survivorship Care Planning: Workshop Summary. Washington, DC: National Academies Press, 2007. 
McCabe MS, Partridge AH, Grunfeld E, et al.: Risk-based health care, the cancer survivor, the oncologist, and the primary care physician. Semin Oncol. 2013; 40(6): 804–812.\n\nBrothers BM, Easley A, Salani R, et al.: Do survivorship care plans impact patients' evaluations of care? A randomized evaluation with gynecologic oncology patients. Gynecol Oncol. 2013; 129(3): 554–558.\n\nGrunfeld E, Julian JA, Pond G, et al.: Evaluating survivorship care plans: results of a randomized, clinical trial of patients with breast cancer. J Clin Oncol. 2011; 29(36): 4755–4762.\n\nNicolaije KA, Ezendam NP, Vos MC, et al.: Impact of an Automatically Generated Cancer Survivorship Care Plan on Patient-Reported Outcomes in Routine Clinical Practice: Longitudinal Outcomes of a Pragmatic, Cluster Randomized Trial. J Clin Oncol. 2015; 33(31): 3550–3559.\n\nParry C, Kent EE, Forsythe LP, et al.: Can't see the forest for the care plan: a call to revisit the context of care planning. J Clin Oncol. 2013; 31(21): 2651–2653.\n\nMayer DK, Birken SA, Check DK, et al.: Summing it up: an integrative review of studies of cancer survivorship care plans (2006-2013). Cancer. 2015; 121(7): 978–996.\n\nOeffinger KC, McCabe MS: Models for delivering survivorship care. J Clin Oncol. 2006; 24(32): 5117–5124.\n\nKenzik K, Pisu M, Fouad MN, et al.: Are long-term cancer survivors and physicians discussing health promotion and healthy behaviors? J Cancer Surviv. 2016; 10(2): 271–279.\n\nSnyder CF, Frick KD, Herbert RJ, et al.: Comorbid condition care quality in cancer survivors: role of primary care and specialty providers and care coordination. J Cancer Surviv. 2015; 9(4): 641–649.\n\nMcCabe MS, Bhatia S, Oeffinger KC, et al.: American Society of Clinical Oncology statement: achieving high-quality cancer survivorship care. J Clin Oncol. 2013; 31(5): 631–640.\n\nPotosky AL, Han PK, Rowland J, et al.: Differences between primary care physicians' and oncologists' knowledge, attitudes and practices regarding the care of cancer survivors. J Gen Intern Med. 2011; 26(12): 1403–1410.\n\nThaw SS, Holtan S, Cao Q, et al.: Dataset 1 in: Inadequate survivorship care after allogeneic hematopoietic cell transplantation: A retrospective chart review. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15633.d215885"
}
|
[
{
"id": "37913",
"date": "27 Sep 2018",
"name": "Asmita Mishra",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors of this study aimed to retrospectively evaluate the utilization of published long-term follow-up guidelines from 2012 for adult allogeneic transplant recipients. This was evaluated in a single institution amongst the fifteen providers at that location.\nThe authors conclude the following\nSurvivorship focused providers are more likely to adhere to the recommendations of the guidelines However overall adherence is incomplete amongst all providers and thus further efforts are needed to improve upon this\n\nSuggestions\nOne of the major limitations of this study includes limited study period. At least 2+ years of additional patient volume can be added to this analysis that is likely to further clarify areas that are routinely being limited and if there has been any performance improvement in survivorship assessment.\nDetails regarding clinical practice may also be helpful. While authors do note that patients were evaluated at 2 year followup, it would be helpful to know standard practice at the institution i.e. are patients transitioned back to oncologists, primary care etc for followup as several of the long term followup needs as noted by the guidelines, are apart of routine non-transplant long term management, thus reflecting the lack of adherence to all 13 categories. 
This is alluded to in the discussion regarding proposed strategies.\nFor provider years of practice, ≥10 years vs. <10 years was utilized as the cut point; however, the guidelines themselves have been out for ~6 years. A cut point based on more recent adopters, i.e., 5 years, may be better, as their practice may differ from that of those who have been practicing a certain way even prior to the guidelines being updated.\nThe authors noted that there were gender differences in the survivorship care. What did the authors think was the etiology of this?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "45800",
"date": "08 Apr 2019",
"name": "Mehrdad Hefazi",
"expertise": [
"Reviewer Expertise Allogeneic HCT",
"GVHD",
"late effects and survivorship"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript is aimed at answering an important question in the field of survivorship care after allogeneic HCT. Authors have used a novel and interesting approach to quantify the rate of compliance with recommended guidelines. However, there are several major issues in their methodology:\n\nFirst, authors have used provider gender, provider years of practice, and providers’ focus of practice (survivorship-focused or not) as independent variables. However, considering that survivorship care is provided longitudinally and often by different providers over time, categorizing subjects based on these variables is almost impossible. Which category do we put a patient in if they were seen twice by a female provider and three times by a male provider over the course of two years? Unless patients were seen exclusively by the same providers for the entire period, I do not think we can categorize them accurately based on these characteristics. We cannot take only one visit into account either as survivorship care involves more than a single visit.\n\nSecond, for overall score assessment, authors have added one point for each organ system if at least one question in the category for that organ was addressed. The questions grouped together in each category are of very different importance though, and they cannot be a substitute for each other. 
For example, in the Respiratory System, there are four questions, and one of them is “Clinical Exam,” which is very likely to be documented in almost any clinician's note. Taking this alone as an overall assessment of adherence to survivorship guidelines for the respiratory system will be rather misleading, as we can have a group of patients with 0% adherence to PFTs who are marked as 100% compliant with overall respiratory survivorship care just because they all had a clinical exam.\n\nThird, missing data were analyzed as not being addressed, even though this is not an accurate assumption. Some aspects of survivorship care such as “inquiry about family functioning” or “inquiry about sexual functioning” are inherently less likely to be documented in the records, whereas other components such as renal function or liver function tests almost always get recorded and can be captured retrospectively. The authors should distinguish these from each other, and either report missing data separately or state that assessment of adherence in certain areas was not possible.\n\nIn the end, an alternative scoring system (or a different reporting method) that is based on more objective and specific measures, instead of lumping together a group of questions with different clinical significance and rates of documentation, would be a more reliable and informative approach. For instance, separately reporting on the rate of compliance with PFTs, measurement of bone densitometry, colonoscopy, or referral to ophthalmology is far more informative and reliable than providing a summative score for how many questions in each organ system were addressed.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1389
|
https://f1000research.com/articles/7-1379/v1
|
03 Sep 18
|
{
"type": "Case Report",
"title": "Case Report: Ostium secundum atrial septal defect with unilateral lung hypervascularity revealing associated right pulmonary artery stenosis",
"authors": [
"Mehdi Slim",
"Malick Bodian",
"Elies Neffati",
"Essia Boughzela",
"Malick Bodian",
"Elies Neffati",
"Essia Boughzela"
],
"abstract": "Background: Atrial septal defect (ASD) is often an isolated disease, but its association with other abnormalities can make diagnosis challenging. Careful analysis of simple complementary exams can help precise anatomical diagnosis ensuring suitable treatment. The aim of this article is to report, from a case report and literature review, diagnostic challenges and the contribution of simple complementary exams, such as chest X-ray, for the diagnostic orientation of an ASD associated with peripheral pulmonary artery stenosis, as well as therapeutic particularities. Case report: We report the case of a girl born in 2007, with history of dyspnoea and recurrent bronchitis in whom a loud systolic murmur was detected fortuitously at the age of 2 years. Her clinical examination was otherwise normal. The electrocardiogram recorded sinus rhythm, incomplete right bundle branch block, and right ventricular hypertrophy. Chest X-ray showed moderate cardiomegaly and hypervascularity of the left lung field contrasting with reduced blood flow to the right lung. Doppler echocardiography revealed a wide ostium secundum ASD, right chamber volume overload and right pulmonary artery stenosis. The latter was confirmed by CT angiography and right cardiac catheterization. The patient underwent percutaneous right pulmonary artery dilation with stent placement. Control chest X-ray noted bilateral hypervascularity of the lung. The ASD was closed percutaneously one year later. The outcome was uneventful. Conclusion: The combination of ASD with pulmonary artery stenosis limits pulmonary hyperflow. In our case, this stenosis was tight and sat on the right branch of the pulmonary artery reducing significantly blood flow to the ipsilateral lung. Careful chest X-ray analysis may suggest diagnosis, which can be confirmed by ultrasounds and if necessary, by further examination, allowing treatment adaptation. 
To our knowledge, this association is very rare and no similar case has been reported.",
"keywords": [
"Atrial septal defect",
"pulmonary stenosis",
"echocardiography",
"angioplasty",
"cardiac catheterization."
],
"content": "Introduction\n\nAtrial septal defect (ASD) is a very common congenital heart disease1,2. It can be associated with other cardiovascular abnormalities; the most common is pulmonary stenosis2. The latter usually concerns the valve or the right outflow tract but rarely pulmonary artery branches. This unusual association can be suspected by careful analysis of complementary exams. Currently, advances in interventional treatment make possible reliable and effective treatment of ASD even when associated with other lesions, in particular pulmonary stenosis3.\n\nThe aim of this article is to report, from a case report and literature review, diagnostic challenges and the contribution of simple complementary exams, such as chest X-ray, for the diagnostic orientation of an ASD associated with peripheral pulmonary artery stenosis, as well as therapeutic particularities.\n\n\nCase report\n\nWe report the case of a girl born in 2007, with history of dyspnoea and recurrent bronchitis in whom a systolic murmur was detected in our outpatient office at the age of two years. The physical examination at this age noted a non-dysmorphic child with normal growth and psychomotor development. Auscultation of the pulmonary area noted loud 3/6 systolic ejection-type murmur, splited second heart sound with a marqued pulmonary component. An electrocardiogram recorded sinus rhythm, incomplete right bundle branch block and right ventricular hypertrophy.\n\nChest X-ray showed moderate cardiomegaly with cardio-thoracic ratio of 0.53, convex mid-left arch, and above all marked hypervascularity of the left lung contrasting with reduced blood flow to the right lung (Figure 1).\n\nCardiomegaly, convex middle arch and unilateral left lung hypervascularity contrasting with reduced blood flow to the right lung is shown.\n\nDoppler echocardiography noted a 20 mm diameter ostium secundum ASD with right chamber volume overload associated with right pulmonary artery (RPA) stenosis. 
Diagnostic confirmation of peripheral pulmonary branch stenosis was made by CT scan and right heart catheterization (Figure 2).\n\n(A) Tight stenosis at the origin of the right pulmonary artery (red arrow); (B) small right pulmonary artery (15 mm) contrasting with (C) dilated left artery (27 mm).\n\nA two-step percutaneous treatment for these lesions was decided. The RPA stenosis was first treated with an 8 mm × 2 cm balloon in 2012, with a poor initial result. A new attempt in the same year with a balloon and stent placement (Express™ Vascular LD 10 × 37 mm) was successful (Figure 3).\n\n(A) Selective angiography of the right pulmonary artery and (B) stent deployment.\n\nThis result was optimized 6 months later with an 18 mm × 20 mm balloon, leaving a mild residual gradient of 10 mm Hg between the pulmonary trunk and the RPA. The ASD was closed successfully 1 year later, in July 2013, with a 24 mm Figulla Flex II prosthesis. The procedure was uneventful, and fluoroscopic control at the end noted the ASD prosthesis in place and the stent at RPA level (Figure 4). Control chest X-ray showed symmetrical bilateral vascularisation of the two lungs (Figure 5). The outcome was favourable. Control echocardiography performed after 4 years of regular follow-up noted mild residual pulmonary stenosis (maximal residual gradient of 15 mmHg), no stent restenosis and a well-sealed ASD prosthesis (Figure 6). Systolic right ventricle function indices were normal.\n\nAtrial septal defect prosthesis (red arrow) and the right pulmonary artery stent (green arrow).\n\nSymmetrical pulmonary vasculature and right pulmonary artery stent shown by red arrow.\n\n\nDiscussion\n\nOur case perfectly illustrates the importance of careful basic semiology analysis in the diagnostic process of congenital heart disease. In fact, heart murmur characteristics and asymmetric pulmonary vasculature on chest X-ray oriented the diagnosis of ASD associated with pulmonary branch stenosis4. 
Confirmation was made by appropriate investigations, particularly Doppler echocardiography, thoracic CT angiography and finally cardiac catheterization with selective angiograms. The chest roentgenogram still retains great value for the diagnostic process in cardiology. Thus, in our patient, the finding of unbalanced pulmonary vasculature, especially when associated with an intense pulmonary murmur, oriented the diagnosis toward pulmonary artery branch stenosis. RPA stenosis caused pulmonary flow deviation mainly to the healthy side, resulting in increased vascularisation of the left lung contrasting with hypovascularity of the right one. This unilateral hyper-flow was quite marked because of its association with the significant left-to-right shunt from the wide associated ASD. This suspicion of RPA stenosis was easily confirmed by echocardiography, CT scan, and right-heart catheterization with measurement of pressures in the right heart chambers and pulmonary branches, and finally by selective pulmonary artery branch angiographies.\n\nConventionally, treatment of this condition was surgical, with ASD closure and pulmonary artery branch plasty. Currently, balloon dilatation with stent placement has revolutionized the management of pulmonary stenosis, especially stenoses involving branches5,6. Pulmonary artery stenosis can complicate the course of many congenital heart diseases. Percutaneous treatment can be performed as a surrogate or adjunct to surgery, and it is considered the standard of care for proximal stenosis. For distal stenosis, it allows treatment of lesions inaccessible to the surgeon, often in addition to repair surgery of the right ventricular outflow tract3. Angioplasty of the pulmonary arteries has evolved considerably since its introduction in the early 1980s. High-pressure balloons, usually 2 to 4 times larger than the diameter of the stenosis, are used. 
The stents used are currently still most often not premounted and have the advantage of being expandable to a diameter sufficiently close to the vessel size in adulthood7. This type of stent was successfully used in our patient and allowed restoration of the pulmonary vasculature by removing the peripheral pulmonary stenosis. The success of pulmonary dilatation permitted percutaneous closure of the ASD, which was performed successfully one year later with a prosthesis. Percutaneous closure is currently the standard treatment for ostium secundum ASD with adequate rims and a diameter less than 38 mm, with a success rate close to 100% and lower morbidity compared to surgery.\n\nOur case is very rare and, to our knowledge, no similar cases have been reported. It proves the feasibility and reliability of percutaneous treatment for such a case. The sequence of lesion treatment is dictated by lesion complexity; failure can shift the case to surgery. This is the reason why we waited to obtain a satisfactory and stable result for the pulmonary artery stenosis before treating the ASD.\n\n\nConclusion\n\nPulmonary stenosis can be associated with ASD, limiting pulmonary hyper-flow. In our case, this stenosis was tight and sat on the origin of the right branch, which significantly reduced blood flow to the ipsilateral lung. Therefore, ASD-related pulmonary hyperflow was directed to the left lung field, explaining the radiological aspect, particularly the unilateral hypervascularity. Careful chest X-ray analysis can raise suspicion of pulmonary artery branch stenosis. Confirmation can be made by Doppler echocardiography and, if necessary, by further examination, allowing treatment adaptation. Management of this association benefited from progress in interventional techniques, allowing successful treatment with a stable long-term outcome. 
Indeed, with a follow-up of four years, the atrial septal closure remained tight, there was no significant residual pulmonary stenosis, and RV function was normalized.\n\n\nConsent\n\nWritten informed consent for publication of the clinical details and images was obtained from the patient's father.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nBrickner ME, Hillis LD, Lange RA: Congenital heart disease in adults. First of two parts. N Engl J Med. 2000; 342(4): 256–63. PubMed Abstract | Publisher Full Text\n\nHouyel L: Communications interauriculaires. Cardiologie. [Internet]. 2002. Reference Source\n\nFraisse A, Kammache I: Traitement interventionnel des vaisseaux. Archives of Cardiovascular Diseases Supplements. 2011; 3(2): 163–72. Publisher Full Text\n\nFouron JC, Favreau-Ethier M, Marion P, et al.: [Congenital peripheral pulmonary stenosis. Presentation of 16 cases and review of the literature]. Can Med Assoc J. 1967; 96(15): 1084–94. PubMed Abstract | Free Full Text\n\nNarayan HK, Glatz AC, Rome JJ: Bifurcating stents in the pulmonary arteries: A novel technique to relieve bilateral branch pulmonary artery obstruction. Catheter Cardiovasc Interv. 2015; 86(4): 714–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoudjemline Y, Legendre A, Ladouceur M, et al.: Branch pulmonary artery jailing with a bare metal stent to anchor a transcatheter pulmonary valve in patients with patched large right ventricular outflow tract. Circ Cardiovasc Interv. 2012; 5(2): e22–25. PubMed Abstract | Publisher Full Text\n\nMcMahon CJ, El-Said HG, Grifka RG, et al.: Redilation of endovascular stents in congenital heart disease: factors implicated in the development of restenosis and neointimal proliferation. J Am Coll Cardiol. 2001; 38(2): 521–6. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "41153",
"date": "10 Dec 2018",
"name": "P. Syamasundar Rao",
"expertise": [
"Reviewer Expertise Congenital heart disease and Interventional Pediatric cardiology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper reports a rare co-occurrence of atrial septal defect and branch pulmonary artery stenosis and is worthy of eventual indexing.\nThe expression and syntax should be improved by editing, by someone who is proficient in English.\nThe assertion that \"no similar case has been reported\" should be removed. In the past, the association of atrial septal defect and branch pulmonary artery stenosis was reported (Lloyd et al., 19941 and Rao et al., 19952), although the subject of these papers did not allow for detailed description.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "51459",
"date": "12 Aug 2019",
"name": "Kartik Patel",
"expertise": [
"Reviewer Expertise Pediatric and Adult Cardiac Surgery"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI congratulate authors for the successful management of this case. This case report is well written still I have some comments for the same.\nIt requires editing by someone who is proficient in English.\n\nCan authors describe what would be the probable reason for narrowing of right pulmonary artery ostia? what was the position of the arch of aorta, right or left?\n\nLooking at follow up image the stent seems to be protruding into the main pulmonary artery. Does there are any signs of hemolysis?\n\nCan authors explain about the approach to the patients with ostial right pulmonary artery stenosis and advantage vs disadvantages of surgery vs stent?\n\nI also have doubt of considering the lesion as peripheral branch stenosis of pulmonary artery. I think better word would be right pulmonary artery stenosis or RPA ostial stenosis.\n\nWhy authors have decided to do two stage approach? why not single stage approach?\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1379
|
https://f1000research.com/articles/7-683/v1
|
31 May 18
|
{
"type": "Research Article",
"title": "Reproductive performance of asian catfish (Hemibagrus wyckii Bagridae), a candidate species for aquaculture",
"authors": [
"Netti Aryani",
"Indra Suharman",
"Hafrijal Syandri",
"Indra Suharman",
"Hafrijal Syandri"
],
"abstract": "Background: Hemibagrus wyckii Bagridae is one of the most important economic fish species that lives in the rivers and reservoir in Riau Province, Indonesia. The present study aimed to determine the reproductive performance of H.wyckii under culture conditions. Methods: A total of 10 female and 10 male fish were selected, and weight, length, characteristics of egg and sperm, and hatchery performance were measured. Eggs were fertilized using the dry method. Egg weight and egg diameters were measured for 50 eggs per female. Egg size (50 eggs for each fish) was measured using an Olympus microscope (CX40). Then, saline solution was added over the eggs, followed by the addition of pooled sperm from 10 males. Results: Average relative fecundity, egg weight and egg diameter were 2060±512 eggs/kg fish, 29.86±1.21 mg and 2.67±0.26 mm, respectively. The fertilization rate and hatching rate were 60.91±4.68% and 42.91±2.92% respectively. Sperm characteristics such as volume per fish (mL), pH, concentration (per mL), motility (%) and duration of motility (second) were 0.82±0.20, 7.15±0.12, 3.68±0.15, 72.77±1.46 and 47.5±4.84, respectively. Conclusion: The study results and scientific observations regarding reproductive performance suggest that H. wyckii can be considered a new candidate species for aquaculture.",
"keywords": [
"Hemibagrus wyckii",
"endangered species",
"alternative fish species",
"egg quality",
"sperm"
],
"content": "Introduction\n\nIn Indonesia, the fisheries sector plays an important economic role through income generation, diversification of livelihoods, supply of animal proteins, and foreign exchange earnings1. During recent decades, in the freshwater aquaculture sector, the prioritized species for culture were Clarias, Pangasius, Tilapia, Common carp and Giant gourami. Wild fish species in rivers, reservoirs and lakes have not been prioritized for aquaculture operations.\n\nIn the Riau Province, there are three rivers, the Kampar Kanan, Kampar Kiri, and Siak rivers, and the Koto Panjang Reservoir. The Kampar Kanan river hosts up to 34 fish species2, the Kampar Kiri river hosts up to 86 fish species3, the Ukai river, a branch of Siak River hosts up to 31 fish species4 and Koto Panjang Reservoir hosts up to 26 fish species5.\n\nHemibagrus wyckii (Bagridae) is one of the most important economic fish species that lives in the rivers and reservoir in Riau Province. H.wyckii (its local name is geso) is a carnivorous freshwater finfish native to Indonesia2,6. H. wyckii has been categorized as of “least concern” by the International Union for Conservation of Nature (IUCN). However, H.wyckii in the Kampar Kanan river was categorized as a vulnerable to endangered species7,8.\n\nDue to the endangered population of H. wyckii, it is necessary to domesticate this species as an aquaculture candidate in the future. Therefore, the present study aimed to determine the reproductive performance of H. wyckii as a potential species under culture conditions to provide preliminary scientific information and evaluation.\n\n\nMethods\n\nWhile H. wyckii is classified as vunerable to endangered in the Kampar Kanan river, the Government of the Republic of Indonesia does not require licences to be obtain to capture and rear this species, hense no licences are applicable to this study. No animals suffered as a result of the activities of this study. H. 
wyckii was transported to the pond farm for rearing, injection, ovulation, stripping and sperm production. At the end of the experiment, the H. wyckii were still in good condition and were returned to the pond.\n\nBroodfish of H. wyckii were collected from upstream areas of the Kampar Kanan river in Kouk village (0° 19' 23.44˝ N and 100° 56' 40.05˝ E), Kampar Regency, Riau Province. The broodfish were kept in oxygenated polythene bags and transported by truck to Sadarlis Green Catfish Farm, Sungai Paku, Kampar Regency, Indonesia. The broodstock of H. wyckii were then acclimated and grown to maturation under the farm conditions. Prior to stocking, female and male fish were weighed using a balance scale (OHAUS model CT 6000-USA), and their lengths were measured using a meter ruler with 0.01 mm accuracy. During the grow-out period, fish were cultured in two ponds (4 x 4 x 3 m) separated by sex. The depth of water in each pond was 2.0 m. The pond inlet water came from the Sungai Paku reservoir at a rate of 2.0 m3 per sec. The broodfish were fed with freshwater seashell meat (Pilsbryoconcha exilis; Unionidae) collected from local fishermen near the Sungai Paku Reservoir. The seashell meat was kept in a cold box at 5°C prior to being given to the broodfish. According to Aryani et al.6, the proximate composition (% wet weight basis) of the seashell meat was 89.37% moisture, 7.08% crude protein, 0.82% fat, 0.29% crude ash and 2.44% carbohydrate. The total seashell meat given to the broodfish every day was 2,500 g per pond (equivalent to 9% of the body weight of the population). Feeding took place at 17:00 because the broodfish are carnivorous. The average weight and length of the ten female broodfish were 2,669.4±486.917 g and 62.84±8.20 cm, respectively; those of the ten male broodfish were 1,769.1±401.10 g and 54.52±7.17 cm, respectively.\n\nThe fish were checked monthly for ovulation and semen production from mid-November 2017 onwards. 
The broodfish were captured with a gillnet formed into a net bag with the appropriate mesh size and anesthetized orally with Tricaine methanesulfonate (MS-222, ethyl 4-aminobenzoate methanesulfonate 98%, Sigma Aldrich Co, USA, MO; 50 mg L-1), based on the dosage used for Solea senegalensis9. Oocyte maturation was assessed for each individual. The fish were returned to their pond after evaluation, and no mortality occurred. Fish were fasted 48 h prior to the evaluation. Oocytes sampled in vivo were taken from females using the method described by Nowosad et al.10 and were placed in Serra's solution (6:3:1 of 70% ethanol, 40% formaldehyde and 99.5% acetic acid) for clarification of the cytoplasm. After 5 min, the position of the stage IV oocyte nucleus was determined using criteria by Krejszeff et al.11 and was classified as germinal vesicle in the periphery or germinal vesicle breakdown (GVBD).\n\nH. wyckii is categorized as an endangered species and is difficult to obtain in the Kampar Kanan river. A total of 10 mature females were eligible for the experiment. Ten mature females that had stage IV oocytes were sampled from broodstock from the 3rd week of February to March 2018 at the same farm, and live weights (FeW) and total lengths (FeL) were measured after anesthetization with 0.50 mg L-1 MS-2229. For ovulation, each female broodfish received two injections of GnRH analogs with a dopamine antagonist (Ovaprim) (manufactured for Syndel Laboratories Ltd, 2595 McCullough Rd. Nanaimo, B.C.V9S 4M9 Canada) applied intraperitoneally under the left pectoral fin. The first injection was 0.2 mL kg BW-1 and the second was 0.6 mL kg BW-1 (total 0.8 mL kg BW-1) at 12 h intervals. These dosages refer to the doses previously used for ovulation of H. wyckii6. At 18 to 20 h after injection, eggs were stripped into a plastic vessel. 
Eggs were fertilized using the “dry method” as described by Dabrowski et al.12. Egg weights of each female were determined by weighing 50 eggs to the nearest 0.01 g, and egg diameters were measured to the nearest 0.01 mm. Egg size (50 eggs for each fish) was measured using an Olympus microscope. Then, a balanced saline solution (7.5 g of NaCl, 0.2 g of KCl, 0.2 g of CaCl2·2H2O, and 0.02 g of NaHCO3, in 1000 mL distilled water) was added over the eggs13, followed by the addition of pooled sperm from 10 males. The eggs were then gently mixed for fertilization and left for three minutes. The fertilized eggs were rinsed several times with incubation water to remove sperm remnants as well as dead and broken eggs. The eggs were left for an additional 25 minutes to facilitate egg hardening by water absorption and disinfected with 100 ppm iodine for 10 minutes. Then, eggs were transferred to incubation trays placed in a vertical hatching system. The water flow rate to each vertical incubator was 3 L minˉ¹. Fifty eggs were randomly sampled at 15 h after fertilization to determine the fertilization rate (FR). The hatching rate (HR) was determined by counting all hatched fry.\n\nMales were stimulated with a half-dose of the same hormonal preparations used to stimulate the females. Semen samples were obtained from 10 fish randomly selected from the farm. The male fish were anesthetized with 50 mg Lˉ¹ of MS-222. The doses of the anaesthetic agents were prepared a few minutes before each experiment based on the methods of Weber et al.9, and then, weights (MaW) and total lengths (MaL) were measured. Special care was taken to avoid any contamination of semen with urine, feces, mucus and water. 
Semen samples were collected using plastic syringes in 3 mL aliquots, then placed in an insulated ice-cooled container, transported to the laboratory and analyzed within 2 h.\n\nThe sperm evaluation included gross (visual) and microscopic examination (as reviewed by Rurangwa et al.14 and Cabrita et al.15). The gross examination covered parameters such as semen volume, determined by collecting the semen in a graduated cylinder and reading the level in milliliters. The microscopic examination was carried out using an Olympus CX40 microscope, at magnifications between 10× and 25×, to determine other parameters such as motility (percentage and duration). Motility (MO) percentage and duration were determined by observing water-activated semen placed on a glass slide under a microscope. The proportion of motile sperm was expressed as a percentage of the total sperm observed. Motility duration (DMO) was determined as the period from the onset of sperm movement to the cessation of any progressive movement, expressed in seconds. Sperm concentration (SC) was measured under a microscope using a Neubauer hemocytometer and calculated as the number of sperm mlˉ¹16. Semen pH was determined with a hand pH meter (HI8424 Hanna Instruments, USA).\n\nThe water temperature of the farm was measured with a thermometer (Celsius scale), and water samples were collected to determine the dissolved oxygen (DO) concentrations. An oxygen meter (YSI model 52, Yellow Spring Instrument Co., Yellow Springs, OH, USA) was used in situ, and pH values were determined with a pH meter (Digital Mini-pH Meter, 0-14PH, IQ Scientific, Chemo-science Thailand Co., Ltd, Thailand). Alkalinity and hardness levels of the water were measured in each replicate according to standard procedures17. The water quality parameters were measured once per month.\n\nResults were given as the means ± SD.
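The summary statistics (mean ± SD) and simple linear regressions reported in this study were computed in SPSS by the authors; purely as an illustrative sketch (variable and function names are ours), the same quantities can be obtained with Python's standard library:

```python
import statistics

def mean_sd(values):
    """Summarize a measured parameter as (mean, sample standard deviation)."""
    return statistics.mean(values), statistics.stdev(values)

def r_squared(x, y):
    """Coefficient of determination (r^2) for a simple linear regression of y on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross-deviation sum
    sxx = sum((a - mx) ** 2 for a in x)                   # sum of squares of x
    syy = sum((b - my) ** 2 for b in y)                   # sum of squares of y
    return (sxy * sxy) / (sxx * syy)
```

A relationship would be reported as "strong" in the tables here when r² exceeds 0.500.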
Simple linear regression analyses were performed using SPSS software (version 16.0 for Windows; SPSS Inc., Chicago, IL). The standard deviation of each parameter was determined. For linear regression analysis, correlations were considered significant at p<0.05.\n\n\nResults\n\nDescriptive measurements and the reproductive performance of female H. wyckii are presented in Table 1. Fifty percent of eggs hatched at 60 h (29–30 °C water temperature). The fertilization rate varied between 53.2 and 68.3%, whereas the hatching rate was between 39.5 and 48.3%.\n\nCharacteristics of male fish and sperm samples are presented in Table 2. The average live weight of the males was 1,769.1 ± 401.1 g. Male H. wyckii were slightly smaller than the females. At the genital maturation stage, the papilla was not prominent in any of the male fish, unlike the secondary sexual characteristic seen in other Hemibagrus species.\n\nAccording to the analysis of the linear relationships (r2) between variables of H. wyckii females shown in Table 3, there were strong linear relationships between AF and FeW, AF and RF, HEW and EW, HW and EW, HW and HEW, and HR and FR. In contrast, the analysis of the linear relationship (r2) between variables of H.
wyckii males, shown in Table 4, showed strong linear relationships between MaW and MaL, MaW and GW, and MaL and GW.\n\nStatistically significant at r2 > 0.500 (underlined)\n\nFeW: Female fish weight, FeL: Female fish length, AF: Absolute fecundity, RF: Relative fecundity, EW: Egg weight, HEW: Hardened egg weight, EWI: Egg weight increase, ED: Egg diameter, HED: Hardened egg diameter, HDI: Hardened egg diameter increase, FR: Fertilization rate, HR: Hatching rate, HW: Hatching weight, GI: Gonadosomatic index.\n\nStatistically significant at r2 > 0.500 (underlined)\n\nMaW: Male fish weight, MaL: Male fish length, GW: Gonad weight, GI: Gonadosomatic index, MV: Semen volume, SC: Sperm concentration, MO: Motility, DMO: Duration of motility.\n\nThe temperature of the pond water ranged from 28°C to 29°C, oxygen ranged from 6.5 mg L-1 to 6.7 mg L-1, pH ranged from 6.5 to 6.8, alkalinity ranged from 42.97 mg L-1 to 57.33 mg L-1 and hardness ranged from 104.83 mg L-1 to 110.51 mg L-1.\n\n\nDiscussion\n\nIn our study, the spawning period of H. wyckii started in the 3rd week of February and continued until the final examination in March. However, our monthly observations of H. wyckii captured by fishermen in the Kampar Kanan river found fish at the gonadal I, II, III and IV stages of development (stage scale by Krejszeff et al.11), indicating that this species is a partial spawner. Spawning in the wild occurs at the start and end of the rainy season, or this species could spawn twice per year. The spawning type of H. wyckii is the same as that of Hemibagrus nemurus18. The duration from fertilization to a 50% hatching rate was 60 h at 29°C to 30°C water temperature. The fertilization rate and hatching rate of various Hemibagrus species are reported in Table 5. Compared with other Hemibagrus species, the hatching duration of H. wyckii is longer.
In other words, H. wyckii has species-specific hatching characteristics, because its egg diameter is larger than that of other Hemibagrus eggs (Table 5).\n\nAFs of H. wyckii were between 4125 and 9958 eggs/fish and RFs were between 1400 and 3000 eggs kgˉ¹. Egg production per kg fish (RF) is thought to be more informative than absolute fecundity. RF values of H. wyckii were lower compared with those of H. nemurus19–21. In our study, there was no strong linear relationship between RF and fish size. Meanwhile, there was a strong linear relationship between AF and fish size (Table 3).\n\nEDs and EWs obtained here ranged between 2.10 and 2.86 mm and 24.4 and 31.8 mg respectively, consistent with those reported by Aryani et al.6. At the end of the hardening process, the increases in egg weight and diameter were 29.86 and 14.9% respectively. In the present study, there were strong linear relationships between EW and HEW, and between EW and HW (Table 3). Lahnsteiner and Patzner22 state that egg weight increases after the hardening process and is linearly correlated with the viability of eggs in rainbow trout, but not in Alakir trout23.\n\nIn this study, the FRs of H. wyckii were higher than those of H. wyckii from research conducted by Aryani et al.6. This suggests that we improved the fertilization procedure for sperm and eggs, which can in turn increase the hatching rate. Nevertheless, the fertilization rate of H. wyckii was lower than that of H. nemurus17,24,25 (Table 5). When HR values are compared with other Hemibagrus species, HR levels are lower than those of Aryani and Suharman14, Adebiyi et al.25 and Suhenda et al.21, but higher than those of Aryani et al.6. There were strong positive correlations between FR and HR (r2 = 0.91) (Table 3). Meanwhile, FR and HR were not positively correlated with ED (Table 3).\n\nGonadotropins (GTHs), follicle-stimulating hormone (FSH), luteinizing hormone (LH), and sex steroids are the key regulators of reproduction26–28.
Moreover, numerous circulating endocrine and locally acting paracrine and autocrine factors regulate the various stages of oocyte development and maturation29,30. Other factors that significantly affect fish eggs are genetic, environmental and stress factors31–33. However, there is no information about the effects of such factors on the embryonic development of H. wyckii. Further information is also required concerning hatchery management of H. wyckii, such as feed levels and feed type6. During the adaptation period in the present study, H. wyckii were fed with the meat of a freshwater mussel (Pilsbryoconcha exilis; Unionidae), locally known as “lokan”, which may not be fully suitable for this species, even though this species is carnivorous. The need for a balanced feed that meets the nutritional requirements of the species being cultured34–36 and the application of a proper feeding program during ovarian development37 have been emphasized. Suhenda et al.21 reported that a diet with an 8% lipid level and 35% crude protein should be offered to H. nemurus broodstock for 4 months to obtain high quality gametes. Meanwhile, Aryani and Suharman20 suggest that a minimum of 32% crude protein should be included in the diet of female H. nemurus broodstock. Additionally, implantation of 17ß-estradiol has also managed to improve the reproductive performance of H. nemurus19. These issues in H. wyckii are yet to be elucidated. Therefore, such research is very important for H. wyckii in the future.\n\nAverage SVs determined in H. wyckii are higher than those in H. nemurus (0.10 to 0.35 mL)18 and Clarias gariepinus38, but lower than those reported for Ictalurus punctatus39. It appears that the semen volume in other species has a positive relationship with sperm concentration, including in Ictalurus punctatus39.
Meanwhile, in fish farms and hatcheries, the biotic and abiotic factors that affect sperm quality are diverse and dependent on complex interactions between genetic, physiological and environmental factors14. On the other hand, improvements in broodstock nutrition and feeding greatly improve gamete quality and larvae production40.\n\nSemen pH values of H. wyckii are consistent with those of other species, including Barbus grypus41 and Carassius gibelio42. The sperm motility of H. wyckii was between 70.2 and 75.5%, and the duration of motility was between 40.0 and 54.0 sec, results that are consistent with Ictalurus punctatus39. According to Effer et al.43, the duration of sperm motility in fish depends on the temperature of the activation medium. Sperm of H. wyckii had an effective fertilizing ability according to the correlation analysis, which did not detect any significant relationship between FR and sperm parameters. However, there was a positive relationship between MO and DMO (r2 = 0.49). Sperm morphology, density, volume, motility and fertilizing capacity, as well as the composition and osmolality of the seminal plasma, are parameters commonly measured to assess sperm quality in fish14,44. In this study, we did not investigate the ionic composition of the semen, but this phenomenon could be related to the ionic composition of semen, which has a significant influence on motility and duration of motility45–47.\n\n\nConclusion\n\nIn conclusion, the reproductive performance of H. wyckii under culture conditions is within the range of available data for other Hemibagrus species. Considering that successful larva production (and potentially juvenile growth) was possible using the same methods as for other Bagridae, H. wyckii can be an alternative species for aquaculture.
However, further studies are clearly required to determine several aspects of this fish under culture conditions.\n\n\nData availability\n\nDataset 1: Data of female size, egg characteristic and hatchery performance of Hemibagrus wyckii 10.5256/f1000research.14746.d20432848",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was funded by a study grant (Riset Dasar Unggulan Perguruan Tinggi) from the Directorate of Research and Community Service, Ministry of Research Technology and Higher Education Republic of Indonesia (No. 311/UN.19.5.1.3/PP/2018).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank the Ministry of Research Technology and Higher Education Republic of Indonesia for supporting this study through the competitive grants scheme (Riset Dasar Unggulan Perguruan Tinggi). Appreciation also goes to all of the fishermen and students who helped the authors during the experiments at the farm.\n\n\nReferences\n\nNhuong T, U-Primo R, Chin YC, et al.: Indonesian aquaculture futures: An analysis of fish supply and demand in Indonesia to 2030 and role of aquaculture using the AsiaFish model. Mar Policy. 2017; 79: 25–32. Publisher Full Text\n\nAryani N: Native species in Kampar Kanan river, Riau Province Indonesia. Int J Fish Aquatic Stud. 2015; 2(5): 213–217. Reference Source\n\nSimanjuntak CPH, Rahardjo MF, Sukimin S: Ichthyofauna in floodplain of Kampar Kiri river. Jurnal Ikhtiologi Indonesia. 2006; 6(2): 99–109. Reference Source\n\nKaidir PP: The diversity of fish species in Ukai stream, branch of Siak River Riau. Berkala Perikanan Terubuk. 2011; 1(39): 24–32.\n\nWarsa A, Krismono ASN, Nurfiarini A, et al.: The capture fishery resources in Koto Panjang Reservoir. Bawal. 2009; 2(3): 93–97. Reference Source\n\nAryani N, Suharman I, Ainul M, et al.: Reproductive performance of Asian catfish (Hemibagrus wyckii, Bleeker, 1858)-Preliminary study. Pakistan J Nutr. 2017; 16(7): 550–556.
Publisher Full Text\n\nAryani N, Suharman I, Sabrina H: Length-weight relationship and condition factor of the critically endangered fish of Geso, Hemibagrus wyckii (Bleeker, 1858) Bagridae from Kampar Kanan river, Indonesia. J Entomol Zool Stud. 2016; 4(2): 119–122. Reference Source\n\nFithra RY, Siregar YI: Fishes biodiversity of Kampar river an inventory from Kampar Kanan river. J Environ Sci. 2010; 2(4): 139–147. Reference Source\n\nWeber RA, Peleteiro JB, García Martín LO, et al.: The efficacy of 2-phenoxyethanol, metomidate, clove oil and MS-222 as anaesthetic agents in the Senegalese sole (Solea senegalensis Kaup 1858). Aquacultur. 2009; 288(1–2): 147–150. Publisher Full Text\n\nNowosad J, Targońska K, Chwaluczyk R, et al.: Effect of temperature on the effectiveness of artificial reproduction of dace [Cyprinidae (Leuciscus leuciscus (L.))] under laboratory and field conditions. J Therm Biol. 2014; 45: 62–68. PubMed Abstract | Publisher Full Text\n\nKrejszeff S, Katarzyna T, Daniel Z, et al.: Domestication affect spawning of the ide (Leuciscus idus)—preliminary study. Aquaculture. 2009; 295(1–2): 145–147. Publisher Full Text\n\nDabrowski K, Ciereszko A, Ramseyer L, et al.: Effects of hormonal treatment on induced spermiation and ovulation in the yellow perch (Perca flavescens). Aquaculture. 1994; 120(1–2): 171–180. Publisher Full Text\n\nKobayashi T, Fushiki S, Ueno K: Improvement of sperm motility of sex-reversed male rainbow trout, Oncorhynchus mykiss, by incubation in high-pH artificial seminal plasma. Environ Biol Fish. 2004; 69(1–4): 419–425. Publisher Full Text\n\nRurangwa E, Kime DE, Ollevier F, et al.: The measurement of sperm motility and factors affecting sperm quality in cultured fish. Aquaculture. 2004; 234(1–4): 1–28. Publisher Full Text\n\nCabrita E, Mảrtinez-Pảramo S, Gavaia PJ, et al.: Factors enhancing fish sperm quality and emerging tools for sperm analysis. Aquaculture. 2014; 432: 389–401. 
Publisher Full Text\n\nMylonas CC, Duncans NJ, Asturiano JF: Hormonal manipulations for the enhancement of sperm production in cultured fish and evaluation of sperm quality. Aquaculture. 2017; 472: 21–44. Publisher Full Text\n\nRice EW, Baird RB, Eaton AD, et al.: Standard methods for the examination of water and wastewater, 22nd ed. American Public Health Association, American Water Works Association, Water Environment Federation. 2012. Reference Source\n\nSularto RRS, Dewi PS, Khasani I: The influence of 17α-methyltestosteron implantation on gonada maturation and fertility of male green catfish (Mystus nemurus). J Ris Akuakultur. 2010; 1(5): 53–57. Reference Source\n\nAryani N, Suharman I: Effects of 17ß-estradiol on the reproduction of green catfish (Hemibagrus nemurus, Bagridae). J Fish Aquacult. 2014; 5(1): 163–166. Reference Source\n\nAryani N, Suharman I: Effect of dietary protein level on the reproductive performance of female of green catfish (Hemibagrus nemurus Bagridae). Aquacult Res Dev. 2015; 6: 11. Publisher Full Text\n\nSuhenda N, Samsudin R, Kristanto AH: The role of dietary lipid level in embryo development, hatching rate and survival rate of green catfish (Mystus nemurus) larvae. J Ris Akuakultur. 2009; 2(4): 201–211. Reference Source\n\nLahnsteiner F, Patzner RA: Rainbow trout egg quality determination by the relative weight increase during hardening: a practical standardization. J Appl Ichthyol. 2002; 18: 24–26. Publisher Full Text\n\nKanyilaz M: Reproductive performance of a newly described Salmonid fish, Alakir Trout (Salmo Kottelati), a candidate species for aquaculture. Pak J Zool. 2016; 48(1): 83–89. Reference Source\n\nBailung B, Biswas SP: Successful induced breeding of a Bagrid Catfish, Mystus dibrugarensis in captive condition. J Aquacult Res Dev. 2014; 5: 7. Publisher Full Text\n\nAdebiyi FA, Siraj SS, Harmin SA, et al.: Induced spawning of a river catfish Hemibagrus nemurus (Valenciennes, 1840). Pertanika J Trop Agric Sci. 
2013; 36(1): 71–78. Reference Source\n\nGarcía Ayala A, Villaplana M, García Hernández MP, et al.: FSH-, LH-, and TSH-expressing cells during development of Sparus aurata L. (Teleostei). An immunocytochemical study. Gen Comp Endocrinol. 2003; 134(1): 72–79. PubMed Abstract | Publisher Full Text\n\nAizen J, Kobayashi M, Selicharova I, et al.: Steroidogenic response of carp ovaries to piscine FSH and LH depends on the reproductive phase. Gen Comp Endocrinol. 2012; 178(1): 28–36. PubMed Abstract | Publisher Full Text\n\nNyuji M, Kazeto Y, Izumida D, et al.: Greater amberjack FSH, LH, and their receptors: Plasma and mRNA profiles during ovarian development. Gen Comp Endocrinol. 2016; 225: 224–234. PubMed Abstract | Publisher Full Text\n\nLubzens E, Young G, Bobe J, et al.: Oogenesis in teleosts: how eggs are formed. Gen Comp Endocrinol. 2010; 165(3): 367–389. PubMed Abstract | Publisher Full Text\n\nLubzens E, Bobe J, Young G, et al.: Maternal investment in fish oocytes and eggs: The molecular cargo and its contributions to fertility and early development. Aquaculture. 2017; 472: 107–143. Publisher Full Text\n\nGrande M, Andersen S: Effect of two temperature regimes from a deep and a surface water release on early development of salmonids. Regulated Rivers: Research & Management. 1990; 5(4): 355–360. Publisher Full Text\n\nWebb MA, Doroshov SI: Importance of environmental endocrinology in fisheries management and aquaculture of sturgeons. Gen Comp Endocrinol. 2011; 170(2): 313–321. PubMed Abstract | Publisher Full Text\n\nKime DE, Nash JP: Gamete viability as an indicator of reproductive endocrine disruption in fish. Sci Total Environ. 1999; 233(1–3): 123–129. Publisher Full Text\n\nChong ASC, Ishak SD, Osman Z, et al.: Effect of dietary protein level on the reproductive performance of female swordtails Xiphophorus helleri (Poeciliidae). Aquaculture. 2004; 234(1–4): 381–392. 
Publisher Full Text\n\nColdebella IJ, Neto JR, Mallmann CA, et al.: The effects of different protein levels in the diet on reproductive indexes of Rhamdia quelen females. Aquaculture. 2011; 312(1–4): 137–144. Publisher Full Text\n\nSink TD, Lochmann RT, Pohlenz C, et al.: Effects of dietary protein source and protein–lipid source interaction on channel catfish (Ictalurus punctatus) egg biochemical composition, egg production and quality, and fry hatching percentage and performance. Aquaculture. 2010; 298(3–4): 251–259. Publisher Full Text\n\nBromage N, Jones J, Randall C, et al.: Broodstock management, fecundity, egg quality and the timing of egg production in the rainbow trout (Oncorhynchus mykiss). Aquaculture. 1992; 100(1–3): 141–166. Publisher Full Text\n\nGbemisola OB, Adebayo OT: Sperm quality and reproductive performance of male Clarias gariepinus induced with synthetic hormones (Ovatide and Ovaprim). International Journal of Fisheries and Aquaculture. 2014; 6(1): 9–15. Publisher Full Text\n\nJaspers EJ, Avault JE Jr, Roussel JD: Testicular and Spermatozoal Characteristics of Channel Catfish, Ictalurus punctatus, outside the Spawning Season. Trans Am Fish Soc. 1978; 107(2): 309–315. Publisher Full Text\n\nEffer B, Figueroa E, Augsburger A, et al.: Sperm biology of Merluccius australis: Sperm structure, semen characteristics and effects of pH, temperature and osmolality on sperm motility. Aquaculture. 2013; 408–409: 147–151. Publisher Full Text\n\nIzquierdo MS, Fernandez-Palacios H, Tacon AGJ: Effect of broodstock nutrition on reproductive performance of fish. Aquaculture. 2001; 197(1–4): 25–42. Publisher Full Text\n\nKhodadadi M, Arab A, Jaferian A: A Preliminary study on sperm morphology, motility and composition of seminal plasma of Shirbot, Barbus grypus. Turk J Fish Aquat Sci. 2016; 16(4): 947–951. 
Publisher Full Text\n\nTaati MM, Mehrad B, Shabani A, et al.: Correlation between chemical composition of seminal plasma and sperm motility characteristics of Prussian carp (Carassius gibelio). AACL Bioflux. 2010; 3(3): 233–237. Reference Source\n\nButts IA, Litvak MK, Tripple EA: Seasonal variations in seminal plasma and sperm characteristics of wild-caught and cultivated Atlantic cod, Gadus morhua. Theriogenology. 2010; 73(7): 873–885. PubMed Abstract | Publisher Full Text\n\nDadras H, Sampels S, Golpuor A, et al.: Analysis of common carp Cyprinus carpio sperm motility and lipid composition using different in vitro temperatures. Anim Reprod Sci. 2017; 180: 37–43. PubMed Abstract | Publisher Full Text\n\nZilli L, Schiavone R, Vilella S: Role of protein phosphorylation/dephosphorylation in fish sperm motility activation: State of the art and perspectives. Aquaculture. 2017; 472: 73–80. Publisher Full Text\n\nBarman AS, Kumar P, Mariahabib, et al.: Role of nitric oxide in motility and fertilizing ability of sperm of Heteropneustes fossilis (Bloch.). Anim Reprod Sci. 2013; 137(1–2): 119–127. PubMed Abstract | Publisher Full Text\n\nAryani N, Suharman I, Syandri H: Dataset 1 in: Reproductive performance of asian catfish (Hemibagrus wyckii Bagridae), a candidate species for aquaculture. F1000Research. 2018. Data Source"
}
|
[
{
"id": "34564",
"date": "11 Jun 2018",
"name": "Zainal Abidin Muchlisin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nTitle:\nI think the title should be revised to \"Reproductive biology and breeding of the Asian catfish (Hemibagrus wyckii) after the domestication process\".\n\nIn the title, \"(Hemibagris wyckii Bagridae)\" is a misunderstanding; the correct form is \"(Hemibagrus wyckii: Bagridae)\", because Bagridae is the family level, or you can write it as \"(Hemibagrus wyckii Bleeker, 1858)\".\n\nAbstract:\nBackground: I think \"most\" should be deleted because it sounds emotional or hyperbolic; just say \"... is one of the important economic fish species ...\".\nMethods: Please mention where the broodfish come from.\nConclusion: The conclusion is out of context.
You have to base the conclusion on the data (findings) alone; drop the mention of a candidate species for aquaculture, because there was no detailed discussion of this issue, so just focus on your present findings.\nKeywords: I suggest not using words that already appear in the title.\nIntroduction:\nI think the state of the art on the Asian catfish is still shallow or unclear, so please provide more information about previous studies on this species related to bio-ecology, feeding, aquaculture, etc.\nPlease add some information about the advantages of this species compared to other freshwater species.\nMethods:\nPlease cite these references to enhance your methods:\nMuchlisin et al., 2010 1 Muchlisin et al., 2011 2\n\nResults:\nThe results have been explained in several short paragraphs; please combine paragraphs 1 and 2 into one paragraph and paragraphs 3 and 4 into another.\nDiscussion:\nThere was no comprehensive discussion to justify why this species has potential for aquaculture. The justification should be based not only on biological aspects but also on economic considerations (see Muchlisin, 20133).\nUnfortunately, no economic evaluation was performed. Therefore, I suggest focusing only on the findings as already presented.\nConclusion:\n\nThe conclusion has to be revised. A conclusion is based on the findings, not on assumptions or interpretations.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "37278",
"date": "22 Aug 2018",
"name": "Rudy Agung Nugroho",
"expertise": [
"Reviewer Expertise Fish physiology",
"Nutrition"
],
"suggestion": "Approved",
"report": "Approved\n\nTitle\nThe title should be revised: delete “a candidate species for aquaculture”. More parameters (growth, feed aspects, etc.) are needed to convey that this species can be a candidate for aquaculture.\nAbstract In the methods: the authors stated that “Egg size (50 eggs for each fish) was measured using an Olympus microscope (CX40)”. This statement should be revised, since the CX40 microscope itself cannot be used to measure egg size. A micrometer within the microscope must have been used to measure the egg size, or the authors have another technique; this method should also be clearly stated in the methods section.\nIntroduction The introduction should be extended by providing more information about the Asian catfish and its biological, physiological, and ecological aspects.\nMethods Ethical considerations: Please revise; it seems there is a typo:\n\"[...]the Government of the Republic of Indonesia does not require licences to be obtain to capture and rear this species, hense no licences are applicable to this study.\" This sentence should be rewritten: \"[...] the Government of the Republic of Indonesia does not require licenses to obtain, capture and rear this species. Hence, no licenses are applicable to this study.\"\n\"In the end of the experiment the H.
wyckii still in good condition until return back to the pond.\" The word pond should be changed to river, because the fish are returned to the original river, not a pond.\nRearing and selection of breeders: \"Then, the broodstock of H. wyckii had been adapted and grown to maturation under the farm conditions”. The word grown should be changed to reared, because this experiment is not about growth.\nWater quality: “Alkalinity and hardness levels of the water were measured in each replicate according to standard procedures”. It is not clear what “each replicate” refers to. Does it mean that the measurement was done in several replications? Why only alkalinity and hardness?\nResults Results are well presented.\nDiscussion The discussion could be extended by explaining the relationship between the water quality results and the reproductive biology, since the purpose of this study concerns culture conditions.\nConclusion\nNeeds slight revision based on all aspects of the findings.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-683
|
https://f1000research.com/articles/7-1373/v1
|
31 Aug 18
|
{
"type": "Case Report",
"title": "Case Report: Post obstructive pulmonary edema (POPE) Type II following elective adenotonsillectomy requiring novel use of high frequency oscillatory ventilation (HFOV)",
"authors": [
"Joanna J. Moser",
"Meghan O'Connell",
"Debbie L. McAllister",
"Joanna J. Moser",
"Meghan O'Connell"
],
"abstract": "This case report describes a previously healthy 14 year-old patient undergoing elective outpatient adenotonsillectomy that was complicated by acute postoperative pulmonary edema requiring 12 hours of high frequency oscillatory ventilation (HFOV) support. We describe the clinical findings that led us to this rare diagnosis and management of post obstructive pulmonary edema (POPE) Type II, a rare but recognized complication following the surgical relief of an upper airway obstruction. This case is unique in that no previously published case report or review of POPE Type II has described the need for HFOV support.",
"keywords": [
"General Anesthesia",
"High Frequency Oscillation Ventilation",
"Pulmonary Edema",
"Sleep Apnea",
"Obstructive Tonsillitis"
],
"content": "Introduction\n\nThis case report describes a pediatric patient undergoing elective adenotonsillectomy complicated by acute postoperative pulmonary edema Type II (POPE II) requiring twelve hours of high frequency oscillatory ventilation (HFOV) support. We describe the case in detail, including management decisions and the broad differential diagnosis that was considered for acute bilateral pulmonary edema, and discuss the rationale for diagnosing this as a rare case of POPE Type II. This case is unique in that no previously published case report or review of POPE II has described the need for HFOV support. This case description was written only after informed consent was obtained from the patient’s parent/legal guardian.\n\n\nCase description\n\nA previously healthy 14 year-old Caucasian female patient weighing 44 kg underwent an elective outpatient adenotonsillectomy for an initial diagnosis of recurrent tonsillitis. General anesthesia was accomplished with a sevoflurane inhalational induction supplemented with intravenous propofol (2 mg kg-1), morphine (0.1 mg kg-1), dexamethasone (0.2 mg kg-1) and ondansetron (0.1 mg kg-1) after intravenous access was established. Direct laryngoscopy revealed a grade 1 Cormack Lehane view of her airway with moderate to large tonsils, and an appropriately sized cuffed endotracheal tube was placed without difficulty. Anesthesia was maintained with 1.0 minimum alveolar concentration (MAC) of sevoflurane and she was ventilated with a tidal volume of 6 mL kg-1, positive end-expiratory pressure (PEEP) of 4 cmH2O and a fraction of inspired oxygen (FiO2) of 0.3. Her preoperative hemoglobin was 136 g L-1 with a hematocrit of 0.4 L L-1. The surgery was complicated by a brisk arterial bleed with an estimated intraoperative blood loss of 600 mL.
The patient was resuscitated with 500 mL Pentaspan and 1000 mL Lactated Ringers intraoperatively (for a total of 34 mL kg-1) and she remained hemodynamically stable throughout the surgery. At the conclusion of the surgery, the patient was extubated fully awake with an oxygen saturation (SpO2) of 99% and transferred uneventfully, fully monitored, on 6 L min-1 blow-by oxygen to the post anesthetic care unit (PACU), where there was one-to-one nursing care.\n\nOn initial assessment in the PACU, she was alert, oriented, and talking in full sentences, with an SpO2 of 95% on 10 L min-1 supplemental oxygen by facemask. At this time, bright red blood was suctioned from her oropharynx. Over the course of the next 45 minutes she began spitting up pink frothy sputum and her SpO2 could not be kept above 92% despite supplemental oxygen at 15 L min-1. Throughout her PACU stay, a further 250 mL of blood loss was recorded by nursing. The patient was treated with a total of 4 mg (0.09 mg kg-1) of intravenous morphine for throat pain. During this 45 minute period, the patient was talking to the PACU staff, and there was no documented observation of the patient obstructing her airway. Fifty minutes after arrival in the PACU, the Staff Anaesthesiologist of record was notified of the above course of events in the PACU. A clinical examination revealed bilateral crackles and a lethargic patient requiring continuous positive airway pressure (CPAP) to maintain an SpO2 of 92%. A prompt arterial blood gas (ABG) was obtained, demonstrating a respiratory acidosis (pH 7.24/pCO2 55 mmHg/pO2 101 mmHg, calculated bicarbonate of 24 mmol L-1 and base excess of -4 mmol L-1) with a hemoglobin of 79 g L-1. Chest x-ray (CXR) revealed bilateral pulmonary edema (Figure 1a).\n\nOnset of pulmonary edema at hour 0 (panel a) and progress at hour 10 (panel c), hour 17 (panel d) and resolution of post obstructive pulmonary edema at hour 37 (panel e).
Intraoperative bronchoscopy at hour 1.5 showing pulmonary edema (panel b).\n\nThe patient was taken emergently to the operating room, where she had an uneventful rapid sequence re-intubation and surgical hemostasis was confirmed. An intraoperative bronchoscopy demonstrated pink frothy secretions consistent with pulmonary edema distal to the endotracheal tube, with no evidence of aspiration or frank blood in the airways (Figure 1b). The distal airways were patent and otherwise normal. At this time she had persistent tachycardia up to 130 beats per minute and intermittent hypotension with systolic blood pressure less than 100 mmHg. Hemodynamics stabilized with one unit of packed red blood cells (273 mL). Post-transfusion hemoglobin was 101 g L-1 and furosemide (10 mg; 0.23 mg kg-1) was given to treat possible volume overload or heart failure. The ABG continued to show a respiratory acidosis (pH 7.20/pCO2 65 mmHg/pO2 75 mmHg, calculated bicarbonate of 25 mmol L-1 and base excess of -3 mmol L-1) with an SpO2 of 95%, PEEP of 9 cmH2O, FiO2 of 1.0 and end tidal CO2 (ETCO2) of 42 mmHg.\n\nThe patient was transferred from the operating room directly to the pediatric intensive care unit (PICU). A bedside transthoracic echocardiogram demonstrated normal left and right ventricular function, with an estimated left ventricular ejection fraction (LVEF) of 56% and an estimated right ventricular systolic pressure (RVSP) of 30–40 mmHg. At that point, she required manual ventilation to enable adequate oxygenation of non-compliant lungs. Her PaO2/FiO2 (P/F) ratio ranged from 87 to 156 during the first two hours in the PICU on a conventional ventilator (Maquet Critical Care SERVO-I ventilator system); however, the amount of PEEP required to provide adequate oxygenation/ventilation exceeded 12 cmH2O, resulting in a peak inspiratory pressure (PIP) greater than 40 cmH2O. 
Forty minutes after arrival in the PICU, the patient was placed on a high frequency oscillatory ventilator with a mean airway pressure of 28 cmH2O, amplitude of 70 cmH2O, power of 5, and FiO2 of 1.0, with an SpO2 of 100%. She was empirically administered broad-spectrum antibiotics to cover any potential infectious cause of her presentation. Over the next 12 hours the HFOV ventilator settings were titrated down as serial CXRs showed improvement (Figure 1c and 1d). She was transitioned to a conventional ventilator 13 hours after her arrival in the PICU on postoperative day one. She was extubated to CPAP (8 cmH2O) 12 hours later and her CXR showed improvement of the lung injury (Figure 1e). After a further 12 hours of gradual tapering of CPAP therapy, the patient was transferred to the ward with humidified 15 L min-1 blow-by oxygen on postoperative day two. Her ABG on 15 L min-1 blow-by oxygen was pH 7.34/pCO2 45 mmHg/pO2 91 mmHg, calculated bicarbonate of 24 mmol L-1 and base excess of -3 mmol L-1. On postoperative day three, she was discharged home without any respiratory issues and without requiring oxygen therapy. Antibiotic therapy was not continued as her infectious work-up was negative. While in the PICU, her parents were questioned in further detail about the patient’s medical history, including a comprehensive review of systems, given the postoperative course of events. It was at this time that the parents first revealed that the patient had an ongoing history of night-time snoring, with features consistent with obstructive sleep apnea from enlarged tonsils and adenoids.\n\n\nDiscussion\n\nThis case report describes a patient who developed acute bilateral pulmonary edema. 
The differential diagnosis for this presentation is broad, and could include volume overload, acute myocardial dysfunction, acute respiratory distress syndrome (ARDS) secondary to aspiration or infection, transfusion-related lung injury (TRALI), or post obstructive pulmonary edema (POPE) Type I or II. This discussion will outline the rationale for this case being diagnosed as a rare case of POPE Type II.\n\nVolume overload in this patient is unlikely, as she was otherwise healthy and intraoperative fluid resuscitation totalled 34 mL kg-1 (23 mL kg-1 crystalloid and 11 mL kg-1 colloid) for resuscitation of an estimated 13.6 mL kg-1 acute blood loss, with ongoing losses postoperatively (5.7 mL kg-1) for an estimated 28% total blood volume loss. Overall, there was a negative fluid balance based on preoperative fasting status and the calculated fluid deficit at the end of the first operation. The intraoperative bronchoscopy did not demonstrate any evidence of foreign material (blood or gastric contents) in the airways, lowering the likelihood of an aspiration event as a potential cause for her symptoms. Cardiac dysfunction was excluded as a cause for her respiratory failure with a normal transthoracic echocardiogram study that showed no regional wall motion abnormalities, normal heart valve function, and a normal LVEF. TRALI was excluded as a potential diagnosis given that the transfusion of packed red blood cells was initiated after she was recognized to be in pulmonary edema. Further, the diagnostic criteria of TRALI include new onset of pulmonary infiltrates within six hours after exposure to blood products. In addition, TRALI typically takes multiple days before clinical improvement is seen1. Finally, the patient did not obstruct at any time during the induction of or emergence from anesthesia, nor did she obstruct her airway in the PACU. 
Therefore, post obstructive pulmonary edema associated with an acute upper airway obstruction causing negative pressure pulmonary edema (POPE Type I) was an unlikely diagnosis in this patient, given that no obstructive event was witnessed in a fully observed and monitored environment.\n\nPulmonary edema is a potentially life-threatening complication of acute airway obstruction, which develops rapidly and often without warning2. POPE Type II is a rare complication that follows surgical relief of a chronic upper airway obstruction, which can occur with hypertrophied adenoids and tonsils3. It has been recognized for decades that upper airway obstruction events can lead to pulmonary edema and right heart failure4–6. Fluid balance in the lungs is determined by pleural pressures, cardiorespiratory interactions, hydrostatic and oncotic pressures, and pulmonary capillary permeability. Over time, breathing against resistance (the Müller maneuver) causes wide swings in intrathoracic pressure, which, combined with the neurohumoral effects of hypoventilation and hypercarbia, can predispose the alveoli to edema. The effects of alveolar hypoventilation can lead to increases in pulmonary artery pressure, which in turn can lead to right heart dysfunction and failure. These effects can be reversible with the removal of the airway obstruction4.\n\nWith the sudden relief of a chronic upper airway obstruction, such as through surgical removal of hypertrophied tonsils and adenoids or other lesions, the intrinsic PEEP generated by these lesions is lost and the balance between these factors is upset, causing the rare presentation of POPE Type II7,8. The alveoli can flood with interstitial fluid, causing acute pulmonary edema. 
What makes our case unique is that, to our knowledge, no other reports in the pediatric or adult literature describe the need for HFOV to ventilate patients and allow for the resolution of pulmonary edema associated with POPE Type II. Notably, in a systematic review and meta-analysis of adult ARDS patients, pooled results suggest that HFOV improves oxygenation, reduces the risk of treatment failure (such as refractory hypoxemia, hypercapnia, hypotension, or barotrauma) and reduces 30-day mortality compared with conventional mechanical ventilation9. Documented cases of POPE Type II have typically resolved after brief supportive care with supplemental oxygen, CPAP, or brief periods of conventional ventilation with PEEP of 4–8 cmH2O2–5,10. In this patient, the level of PEEP and overall airway pressure required to maintain oxygenation and treat the respiratory acidosis was higher than the conventional ventilators available at our institution (Maquet Critical Care SERVO-I ventilator system) could provide. This case highlights that prompt supportive management, including HFOV, may need to be initiated immediately and that POPE Type II should be considered in the management of acute pulmonary edema post adenotonsillectomy.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.\n\n\nEthics\n\nWritten informed consent for publication of the patient’s clinical details and/or clinical images was obtained from the parent/guardian of the patient.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nSanchez R, Toy P: Transfusion related acute lung injury: a pediatric perspective. Pediatr Blood Cancer. 2005; 45(3): 248–255. PubMed Abstract | Publisher Full Text\n\nMehta VM, Har-El G, Goldstein NA: Postobstructive pulmonary edema after laryngospasm in the otolaryngology patient. Laryngoscope. 2006; 116(9): 1693–1696. PubMed Abstract | Publisher Full Text\n\nFeinberg AN, Shabino CL: Acute pulmonary edema complicating tonsillectomy and adenoidectomy. Pediatrics. 1985; 75(1): 112–114. PubMed Abstract\n\nSofer S, Baer R, Gussarsky Y, et al.: Pulmonary edema secondary to chronic upper airway obstruction. Hemodynamic study in a child. Intensive Care Med. 1984; 10(6): 317–319. PubMed Abstract | Publisher Full Text\n\nSofer S, Weinhouse E, Tal A, et al.: Cor pulmonale due to adenoidal or tonsillar hypertrophy or both in children. Noninvasive diagnosis and follow-up. Chest. 1988; 93(1): 119–122. PubMed Abstract | Publisher Full Text\n\nLuke MJ, Mehrizi A, Folger GM Jr, et al.: Chronic nasopharyngeal obstruction as a cause of cardiomegaly, cor pulmonale, and pulmonary edema. Pediatrics. 1966; 37(5): 762–768. PubMed Abstract\n\nMiro AM, Shivaram U, Finch PJ: Noncardiogenic pulmonary edema following laser therapy of a tracheal neoplasm. Chest. 1989; 96(6): 1430–1431. PubMed Abstract | Publisher Full Text\n\nUdeshi A, Cantie SM, Pierre E: Postobstructive pulmonary edema. J Crit Care. 2010; 25(3): 508.e1–5. PubMed Abstract | Publisher Full Text\n\nSud S, Sud M, Friedrich JO, et al.: High frequency oscillation in patients with acute lung injury and acute respiratory distress syndrome (ARDS): systematic review and meta-analysis. BMJ. 2010; 340: c2327. PubMed Abstract | Publisher Full Text\n\nAustin AL, Kon A, Matteucci MJ: Respiratory Failure in a Child Due to Type 2 Postobstructive Pulmonary Edema. Pediatr Emerg Care. 2016; 32(1): 23–24. 
PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "37820",
"date": "03 Sep 2018",
"name": "Lynn Martin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a case report of a 14 yo female developing severe POPE type II following elective tonsillectomy. The severity of the edema required treatment with HFOV. The case report is well written and referenced. I have few questions or suggestions for the authors:\nI believe the manuscript would be enhanced if more details regarding what screening questions for OSA were or were not asked in the pre-operative period. Why are pre-operative hemoglobin and hematocrit obtained? Why did it take 45 minutes to notify the staff anesthesiologist of the ongoing blood loss of 250 mL? Why was morphine given to a patient in respiratory distress? The discussion is excellent and complete. My only suggestion would be to move the second paragraph to the end of the discussion. Thus you are highlighting the unique features of this case report first and finishing be ruling out other potential causes.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "41573",
"date": "07 Jan 2019",
"name": "Andrea Gentili",
"expertise": [
"Reviewer Expertise My area of expertise is pediatric anesthesia and intensive care."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe a case of an acute postoperative pulmonary edema Type II (POPE II) that required high frequency oscillatory ventilation (HFOV) support in a 14-year-old girl undergoing elective adenotonsillectomy. The manuscript is interesting, well presented and documented. The discussion is well articulated and appropriate. The clinical case demonstrates how even an adenotonsillectomy surgery can lead to major complications such as intraoperative hemorrhage and postoperative pulmonary edema.\nMy considerations and suggestions for the authors are:\nThe authors could specify which anamnestic elements they used in the preoperative evaluation of the girl's OSA condition. Why was the general anesthesia accomplished with a sevoflurane inhalational induction supplemented with intravenous propofol and morphine, when at the patient's age an exclusively intravenous induction could be performed? Was this anesthetic choice made on suspicion of a high airway obstruction from very enlarged tonsils? Did the transition from conventional ventilation with a PIP greater than 40 cmH20 to HFOV with a MAP of 28 cmH2O involve any cardiovascular changes? During HFOV ventilation, for how many hours was the MAP set at 28 cmH2O?\n\nBased on my level of expertise in the subject matter, I believe the manuscript can be indexed.\n\nIs the background of the case’s history and progression described in sufficient detail? 
Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1373
|
https://f1000research.com/articles/7-1325/v2
|
31 Aug 18
|
{
"type": "Opinion Article",
"title": "Central sensitization and pain hypersensitivity: Some critical considerations.",
"authors": [
"Emanuel N. van den Broeke"
],
"abstract": "Since its discovery, central sensitization has gained enormous popularity. It is widely used to explain pain hypersensitivity in a wide range of clinical pain conditions. However, at present there is no general consensus on the definition of central sensitization. Moreover, the use of the term central sensitization in the clinical domain has been criticized. The aim of this paper is to foster the discussion on the definition of central sensitization and its use.",
"keywords": [
"Central sensitization",
"definition",
"pain",
"nociception",
"secondary hyperalgesia."
],
"content": "Introduction\n\n“Many subjects, but by no means all, become conscious of soreness of skin surrounding a small area of injury”\n\nWith these words Sir Thomas Lewis starts one of the chapters in his book “Pain”1 (p. 68). The sentence refers to what is now known as “secondary hyperalgesia”, which has intrigued pain neuroscientists for almost a century. Lewis was probably the first that systematically studied this phenomenon. He hypothesized that secondary hyperalgesia was due to a peripheral mechanism (“nocifensor axon reflex”). Impulses generated by nerves at the site of injury travel antidromically via branches to their endings, where there is a release of substances that excite neighboring nerves1.\n\nHowever, by performing a series of psychophysical experiments Hardy et al.2 came to another conclusion. Contrary to Lewis who suggested that secondary hyperalgesia resulted from a spreading of excitation in the skin, Hardy et al. hypothesized that secondary hyperalgesia resulted from a “central excitatory state”2 (p. 139).\n\nSimilar to the idea of Lewis of a network of interconnected nerves, Hardy et al. hypothesized that in the spinal cord there is a pool of neurons consisting of primary and secondary neurons that make synaptic connections to a network of “internuncial” neurons. The function of these internuncial neurons would be to establish and maintain an excitatory state within the neuron pool. In the case of tissue injury, the barrage of noxious impulses originating from the site of injury enters the spinal cord where they excite the network of internuncial neurons, leading to an excitation of connected neurons2.\n\n“If now the skin is pricked in the area of secondary hyperalgesia, a burst of impulses passes into the spinal cord and when reaching the tertiary neuron it is facilitated giving rise to more intense sensation than usual”2 (p.135).\n\nWoolf3 was the first that provided evidence for such a “central excitatory state”. 
He showed that in rats the motor reflex threshold elicited by mechanical punctate stimuli delivered adjacent to a burn injury was reduced for many hours3. In subsequent studies, Woolf and co-workers further showed that the induction of this “central excitatory state” does not require tissue injury, but that it can also be induced after electrical stimulation of C-fiber nociceptors4. Based on these findings, Woolf and co-workers5 introduced the term “central sensitization” (CS):\n\n“This is the phenomenon of aberrant convergence; the generation of pain by activating sensory fibres that normally only produce innocuous sensations i.e. the large myelinated low threshold afferents. Aberrant convergence arises as a consequence of changes induced within the spinal cord by activity in unmyelinated afferent fibres – a process called central sensitization” (p. 256).\n\nActually, Woolf et al. describe here what is now called allodynia: “pain in response to a non-nociceptive stimulus”6.\n\nSince 2008, the task force for taxonomy of the International Association for the Study of Pain (IASP)6 has proposed the following definition of CS:\n\n“Increased responsiveness of nociceptive neurons in the central nervous system to their normal or subthreshold afferent input”.\n\nThe task force for taxonomy6 defines a nociceptive neuron as:\n\n“A peripheral or central neuron of the somatosensory system that is able to encode a noxious stimulus”.\n\nBut what is meant by encoding? And which neurons can be considered part of the somatosensory system and which not?\n\nNowadays, the term CS is very popular and is associated with many more conditions than secondary hyperalgesia. The concept of CS is used by both basic scientists and clinicians; however, its use in the clinical domain has been criticized7. 
The aim of this paper is to foster the discussion on the definition of CS and its use.\n\n\nIs CS defined too broadly?\n\nIf a definition becomes too broad, it will be used non-selectively and will lose its value. On the other hand, if a definition becomes too specific, it may miss important phenomena. The IASP proposal for the definition of CS clearly describes a phenomenon. However, in the literature CS is often presented as a mechanism; see, for example, Vardeh et al.8 (p. T56). More importantly, the definition does not mention a functional meaning. If the purpose of the term CS is to explain pain hypersensitivity, then this should be included in the definition. Furthermore, the term “nociceptive neurons” may then not be specific enough. As pointed out by Sandkühler9:\n\n“Nociceptive neurons comprise a heterogeneous cell group with putatively many different and sometimes opposing functions, including a large group of inhibitory interneurons. Thus enhanced responsiveness of some of these neurons could contribute to hyperalgesia. On the other hand, enhanced responsiveness of inhibitory nociceptive neurons may well lead to stronger feedback inhibition and analgesia, while still other neurons may not contribute to the experiences of pain but rather to altered motor or vegetative responses to a noxious stimulus” (p. 708).\n\nWoolf10 proposed an alternative definition of CS, which links CS directly to pain hypersensitivity:\n\n“An amplification of neural signaling within the CNS that elicits pain hypersensitivity” (p. S5).\n\nHowever, establishing a causal relationship between CS and pain hypersensitivity is particularly difficult. Indeed, it is possible to measure the activity of nociceptive neurons in the CNS in animal preparations, but obviously we cannot measure pain perception. 
Conversely, we can measure pain perception in humans, but we cannot directly measure the activity of nociceptive neurons11.\n\nIn addition, because we cannot record directly from nociceptive neurons in humans and have to rely on changes in pain perception or thresholds, the risk is ending up in a circulus in probando (circular reasoning)12. For example, patient X shows CS because she/he suffers from pain hypersensitivity, and pain hypersensitivity is a manifestation of CS. The evidence offered for the conclusion is no different from the conclusion itself.\n\nTaken together, depending on the purpose of the term CS, it may be necessary to reconsider the IASP definition.\n\n\nIs secondary hyperalgesia the only example of CS?\n\nOn a related note, the task force for taxonomy of the IASP6 further states about the term sensitization:\n\n“This is a neurophysiological term that can only be applied when both input and output of the neural system under study is known, e.g. by controlling the stimulus and measuring the neural event”.\n\nAccording to Treede13, the phenomenon of secondary hyperalgesia induced by intradermal capsaicin injection\n\n“…is currently the only example where both input and output of spinal neurons have been documented in the same model and, hence, the IASP definition of CS is fulfilled” (p. 1200).\n\nThis would imply that, for the moment, the term CS, as provided by the IASP, may only be used for this particular condition.\n\nWhen injected into the skin, capsaicin activates TRPV1-expressing nociceptors and elicits a burning sensation14. A consequence is the development of increased pinprick sensitivity in a large part of the skin surrounding the injection site14, a phenomenon reminiscent of secondary hyperalgesia after tissue injury. 
By recording the activity of nociceptive neurons in the primate spinal cord before and after capsaicin injection, Simone et al.15 showed that both wide-dynamic-range (WDR) and high-threshold (HT) neurons responded more strongly to pinprick stimuli when these stimuli were delivered after the injection to the skin surrounding the injection site (output). The same group also recorded the activity of peripheral A-fiber and C-fiber nociceptors in this area (input), but their activity was unchanged16. Because these sensitized spinal neurons project via the spinothalamic pathway to the brain, they may contribute to the increase in pinprick perception in humans.\n\nHowever, it remains puzzling why secondary hyperalgesia is characterized by an increase in the perception of mechanical pinprick stimuli, but not heat stimuli17–19. Should a sensitization of WDR neurons, which are polymodal, not also lead to an increase in perception for other modalities like touch or heat?\n\n\nNociceptive input (and increases thereof) does not necessarily elicit pain\n\nAn important function of nociception in normal conditions is to warn of tissue damage. Therefore, it would make sense that nociceptors are activated before there is any tissue damage. Compatible with this idea are the observations that nociceptors in humans are activated by stimulus intensities that are not perceived as painful20.\n\nIndeed, in normal conditions (i.e. without sensitization) mechanical pinprick stimuli typically elicit a sharp pricking sensation, which is not perceived as painful by the majority of people. However, studies using microneurography have clearly demonstrated that such mechanical pinprick probes activate mechanosensitive nociceptors in the skin21–23. 
Moreover, a study comparing the perceptual pain thresholds in human volunteers with the thresholds for nociceptors in animals using the same pinprick probes suggests that the non-painful sharp pricking sensation is mediated by mechanosensitive nociceptors24.\n\nPinprick stimuli delivered after sensitization to the skin surrounding the site at which sensitization was induced clearly elicit an increase in the intensity of perception, but this is not always perceived as painful. Importantly, the perception elicited by tactile stimuli is not increased25 (and unpublished observations), indicating that the increase in the pricking sensation elicited by pinprick stimuli after sensitization is mediated by mechanosensitive nociceptors instead of low-threshold mechanoreceptors.\n\nLikewise, we recently showed that heat perception elicited by tiny laser stimuli selectively activating C-fiber nociceptors in the skin was greater when these stimuli were delivered to the area of secondary hyperalgesia26. However, despite the fact that our heat stimuli selectively activated C-fiber nociceptors, the perception elicited by these stimuli was not qualified as painful either at baseline (before inducing sensitization) or after the induction of sensitization. Importantly, the greater heat sensitivity elicited by these stimuli is probably a perceptual correlate of CS. Indeed, Kronschläger et al.27 recently showed in rats that strong peripheral nociceptive input activates glial cells (which include microglia and astrocytes), leading to the release of cytokines and chemokines that excite remote C-fiber synapses.\n\nTaken together, both examples (increased pinprick sensitivity and greater heat sensitivity) suggest that CS does not necessarily result in pain hypersensitivity. This would argue for a mechanism-based approach to CS rather than focusing on changes in pain perception only. 
Indeed, according to the definitions provided by the IASP6, one cannot label the increases in pinprick and heat perception as “hyperalgesia” because they are not increases in pain sensitivity. They cannot be labeled as “allodynia” either, because the stimulus is a nociceptive one and is not always perceived as painful after sensitization.",
"appendix": "Grant information\n\nENvdB is supported by the Fonds de Recherche Clinique (FRC) of the Université catholique de Louvain, Brussels, Belgium, and the European Research Council \"starting\" grant (PROBING PAIN 336130).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe author would like to thank Diana Torta, Omer Van den Bergh, Leon Plaghki, and André Mouraux for the fruitful discussions.\n\n\nReferences\n\nLewis T: Nocifensor tenderness. In: Pain. The Macmillan Company, New-York, 1942; 68–83. Publisher Full Text\n\nHardy JD, Wolff HG, Goodell H: Experimental evidence on the nature of cutaneous hyperalgesia. J Clin Invest. 1950; 29(1): 115–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoolf CJ: Evidence for a central component of post-injury pain hypersensitivity. Nature. 1983; 306(5944): 686–688. PubMed Abstract | Publisher Full Text\n\nWoolf CJ, Wall PD: Relative effectiveness of C primary afferent fibers of different origins in evoking a prolonged facilitation of the flexor reflex in the rat. J Neurosci. 1986; 6(5): 1433–1442. PubMed Abstract | Publisher Full Text\n\nWoolf CJ, Thompson SW, King AE: Prolonged primary afferent induced alterations in dorsal horn neurones, an intracellular analysis in vivo and in vitro. J Physiol (Paris). 1988; 83(3): 255–266. PubMed Abstract\n\nLoeser JD, Treede RD: The Kyoto protocol of IASP Basic Pain Terminology. Pain. 2008; 137(3): 473–477. PubMed Abstract | Publisher Full Text\n\nHansson P: Translational aspects of central sensitization induced by primary afferent activity: what it is and what it is not. Pain. 2014; 155(10): 1932–1934. PubMed Abstract | Publisher Full Text\n\nVardeh D, Mannion RJ, Woolf CJ: Toward a Mechanism-Based Approach to Pain Diagnosis. J Pain. 2016; 17(9 Suppl): T50–69. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSandkühler J: Models and mechanisms of hyperalgesia and allodynia. Physiol Rev. 2009; 89(2): 707–758. PubMed Abstract | Publisher Full Text\n\nWoolf CJ: Central sensitization: implications for the diagnosis and treatment of pain. Pain. 2011; 152(3 Suppl): S2–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCervero F: Central sensitization and visceral hypersensitivity: Facts and fictions. Scand J Pain. 2014; 5(2): 49–50. PubMed Abstract | Publisher Full Text\n\nvan den Broeke EN, Torta DM, Van den Bergh O: Central sensitization: Explanation or phenomenon? Clin Psychol Sci. 2018; in press. Publisher Full Text\n\nTreede RD: Gain control mechanisms in the nociceptive system. Pain. 2016; 157(6): 1199–204. PubMed Abstract | Publisher Full Text\n\nLaMotte RH, Shain CN, Simone DA, et al.: Neurogenic hyperalgesia: psychophysical studies of underlying mechanisms. J Neurophysiol. 1991; 66(1): 190–211. PubMed Abstract | Publisher Full Text\n\nSimone DA, Sorkin LS, Oh U, et al.: Neurogenic hyperalgesia: central neural correlates in responses of spinothalamic tract neurons. J Neurophysiol. 1991; 66(1): 228–246. PubMed Abstract | Publisher Full Text\n\nBaumann TK, Simone DA, Shain CN, et al.: Neurogenic hyperalgesia: the search for the primary cutaneous afferent fibers that contribute to capsaicin-induced pain and hyperalgesia. J Neurophysiol. 1991; 66(1): 212–227. PubMed Abstract | Publisher Full Text\n\nAli Z, Meyer RA, Campbell JN: Secondary hyperalgesia to mechanical but not heat stimuli following a capsaicin injection in hairy skin. Pain. 1996; 68(2–3): 401–411. PubMed Abstract | Publisher Full Text\n\nRaja SN, Campbell JN, Meyer RA: Evidence for different mechanisms of primary and secondary hyperalgesia following heat injury to the glabrous skin. Brain. 1984; 107(Pt 4): 1179–1188. 
PubMed Abstract | Publisher Full Text\n\nvan den Broeke EN, Lenoir C, Mouraux A: Secondary hyperalgesia is mediated by heat-insensitive A-fibre nociceptors. J Physiol. 2016; 594(22): 6767–6776. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Hees J, Gybels JM: Pain related to single afferent C fibers from human skin. Brain Res. 1972; 48: 397–400. PubMed Abstract | Publisher Full Text\n\nGarell PC, McGillis SL, Greenspan JD: Mechanical response properties of nociceptors innervating feline hairy skin. J Neurophysiol. 1996; 75(3): 1177–1189. PubMed Abstract | Publisher Full Text\n\nSlugg RM, Campbell JN, Meyer RA: The population response of A- and C-fiber nociceptors in monkey encodes high-intensity mechanical stimuli. J Neurosci. 2004; 24(19): 4649–4656. PubMed Abstract | Publisher Full Text\n\nSlugg RM, Meyer RA, Campbell JN: Response of cutaneous A- and C-fiber nociceptors in the monkey to controlled-force stimuli. J Neurophysiol. 2000; 83(4): 2179–2191. PubMed Abstract | Publisher Full Text\n\nGreenspan JD, McGillis SL: Stimulus features relevant to the perception of sharpness and mechanically evoked cutaneous pain. Somatosens Mot Res. 1991; 8(2): 137–147. PubMed Abstract | Publisher Full Text\n\nvan den Broeke EN, Mouraux A: High-frequency electrical stimulation of the human skin induces heterotopical mechanical hyperalgesia, heat hyperalgesia, and enhanced responses to nonnociceptive vibrotactile input. J Neurophysiol. 2014; 111(8): 1564–1573. PubMed Abstract | Publisher Full Text\n\nLenoir C, Plaghki L, Mouraux A, et al.: Quickly-responding C-fibre nociceptors contribute to heat hypersensitivity in the area of secondary hyperalgesia. J Physiol. 2018; In press. PubMed Abstract | Publisher Full Text\n\nKronschläger MT, Drdla-Schutting R, Gassner M, et al.: Gliogenic LTP spreads widely in nociceptive pathways. Science. 2016; 354(6316): 1144–1148. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "38228",
"date": "18 Oct 2018",
"name": "Geert Crombez",
"expertise": [
"Pain psychology",
"learning psychology",
"philosophy of causality and science",
"practice of science"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper by Emanuel van den Broeke critically discusses the definition and use of the terms central sensitisation and pain hypersensitivity. As stated by the author, the term central sensitisation has become increasingly popular, and seems to have become over-used, if not mis-used. At times, it is important to reflect upon the origin of terms, and to track how meaning and use have changed over time. Evidently, we do not have to hold on to the past, and definitions and use may change as science advances. The paper of van den Broeke is timely, and provides essential reading for many. It is an ideal paper for scholarly reflection and group discussion.\nIt nicely traces the origin in meaning, and the various changes in definition. It critically analyses interrelationships with other constructs, and potential disadvantages. Notwithstanding, it does not provide definite answers. Probably, that is not possible, but I would suggest that the authors reflect upon what should be the way forward. What do they recommend to readers and researchers?\nMost important seems to be a precise use of the term, and to avoid confusion in meaning. Indeed, as pointed out, central sensitisation can be used to describe a phenomenon or to describe a mechanism. 
This is confusing and may result in circular reasoning: central sensitisation explains central sensitisation. In that respect, I have learned to make a distinction between at least three ways of using scientific terms: (1) as a result, (2) as an explanation and (3) as a procedure. Central sensitisation as a result refers to the phenomenon, most often as the result of a specific procedure. Indeed, there are some experimental procedures that induce the phenomenon. Finally, a scientific endeavour is to provide explanations, often mechanistic explanations, for the phenomenon that results from particular experimental procedures. In times of confusion and overuse, it is useful to come back and reflect upon what exactly is meant by someone.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "38593",
"date": "23 Oct 2018",
"name": "Philipp Hüllemann",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nEmanuel N van den Broeke provides a very interesting overview of how the term ‘central sensitization’ (CS) was originally characterized, how the use of the term developed in the scientific field and how extensively it may now be overused in basic and clinical research. The author lists several “historic” and recent scientific examples, which shed light on the mechanistic origin of central sensitization. It soon becomes clear that there is no actual consensus on the definition of central sensitization and that scientific evidence is sparse as well as contradictory on some occasions. Newer studies show that the perceived intensity of thermal and mechanical stimuli increases most probably due to central sensitization processes but that this increase of intensity is not necessarily perceived as painful. Therefore, non-painful aspects of central sensitization are lacking in the current definition of CS. Further, we need to think of a more specific definition, which may guide researchers and clinicians in the use of the term.\n\nI have two suggestions:\n\nIt might have been useful to add some sentences on peripheral sensitization and its possible role in driving, as well as maintaining, central sensitization. A short conclusion/summary including the author's thoughts would also be helpful.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? 
Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
}
] | 2
|
https://f1000research.com/articles/7-1325
|
https://f1000research.com/articles/7-606/v1
|
17 May 18
|
{
"type": "Research Article",
"title": "Absence of toll-like receptor 9 Pro99Leu polymorphism in cervical cancer",
"authors": [
"Alex Chauhan",
"Nilesh Pandey",
"Nitin Raithatha",
"Purvi Patel",
"Ajesh Desai",
"Neeraj Jain",
"Alex Chauhan",
"Nilesh Pandey",
"Nitin Raithatha",
"Purvi Patel",
"Ajesh Desai"
],
"abstract": "Background: Toll-like receptor 9 (TLR9) plays a key role in the elimination of viral pathogens by recognising their CpG DNA. Polymorphisms in the TLR9 gene may influence their recognition and subsequent elimination. Therefore, the present study was designed to elucidate the role of a rare unexplored TLR9 gene polymorphism C296T/ Pro99Leu (rs5743844) in cervical cancer susceptibility among Indian women. Methods: The genotyping of TLR9 Pro99Leu polymorphism in 110 cervical cancer patients and 141 healthy controls was performed by polymerase chain reaction–restriction fragment length polymorphism (PCR-RFLP). Results: The genotype frequency detected in both cervical cancer and control populations was 1.0 (CC), 0.0 (CT) and 0.0 (TT); while the allele frequency was found to be 1.0 (C) and 0.0 (T). Conclusions: The present study results demonstrate no involvement of TLR9 C296T/ Pro99Leu polymorphism in cervical cancer susceptibility and supports worldwide minor allele frequency (MAF) (0.0002) status of the same as no nucleotide variation was detected in any of the study participants.",
"keywords": [
"Cervical cancer",
"TLR9",
"Polymorphism",
"Genotypic frequency",
"Susceptibility"
],
"content": "Introduction\n\nCervical cancer is the fourth-most common cancer among women globally and the second leading cause of cancer-related deaths in Indian women1. Although persistent infection of high-risk human papillomavirus (hrHPV) is considered the chief causative agent of cervical cancer, variations in host genetic make-up do influence the risk of acquiring HPV infection, and susceptibility to cervical carcinogenesis2–4. In this context, variations in Toll-like receptors (TLRs), which play a crucial role in activating the immune response by identifying pathogen-associated molecular patterns, have drawn significant attention, as single nucleotide polymorphisms (SNPs) in TLR genes have been shown to alter susceptibility to many infections and human diseases including cancer5–8.\n\nTen functional TLR genes are known in humans9; the product of one of these, TLR9, recognizes microbial DNA motifs5,10,11. The frequently analysed TLR9 SNPs G2848A and −1486 T/C have been suggested to alter cervical cancer susceptibility12–15, but no report is available elucidating the role of the TLR9 Pro99Leu polymorphism in cancer. Although TLR9 Pro99Leu is a rare population SNP with a global minor allele frequency (MAF) of 0.0002 as reported in the single nucleotide polymorphism database (dbSNP), in-vitro analysis has revealed its significant role in DNA ligand hyporesponsiveness16. Considering the fact that cervical cancer is largely caused by hrHPV infection and TLR9 has the ability to respond to viral DNA, the present study was designed to elucidate the association of the TLR9 Pro99Leu polymorphism with cervical cancer.\n\n\nMethods\n\nBiopsies from 110 cervical cancer patients and cervical smears from 141 healthy volunteers were collected from Shree Krishna Hospital, Anand; Sir Sayajirao General Hospital, Vadodara; and GMERS Hospital, Ahmedabad, India. The samples were collected from 2012 to 2017. 
The cancer biopsies and healthy cervical smears were histopathologically and cytologically confirmed. The clinical staging of cervical cancer samples was done as per The International Federation of Gynecology and Obstetrics (FIGO) guidelines.\n\nDNA was isolated from cervical cancer biopsies and cervical smears by the standard phenol-chloroform extraction method17. In the case of a low number of cervical cells, a spin-column based DNA isolation kit (Macherey-Nagel, Germany; Cat# 740952.50) was utilized as per the manufacturer's instructions. The quality and quantity of DNA were determined using an ethidium bromide-stained 1% agarose gel on a GelDoc system (BioRad, USA) as well as a NanoDrop 2000 (Thermofisher, USA). The TLR9 Pro99Leu polymorphism was detected using the polymerase chain reaction–restriction fragment length polymorphism (PCR-RFLP) method as described by Kubarenko et al.16 Briefly, a 25µl PCR mix contained 0.1µM each of forward and reverse primer (Imperial Life Sciences, India), 0.1mM dNTP mix (Invitrogen, USA; Cat# 18427088), 2.5mM MgCl2 (Vivantis, USA; Cat# RB0204), 1 unit Taq DNA polymerase (Kapabiosystems, USA; Cat# KK1015) and 100 to 150ng genomic DNA. The PCR was run on an MJ Mini thermal cycler (BioRad, USA).\n\nUpon confirmation of the 337 bp PCR product on a 2% ethidium bromide-stained agarose gel, 10µl of PCR product was digested with the BslI restriction enzyme (New England Biolabs, USA; Cat# R0555S) at 55°C overnight, separated on a 12% polyacrylamide gel and analysed on a GelDoc system (BioRad, USA) for genotype identification. The details of PCR conditions and parameters for genotype consideration are mentioned in Table 1 and Table 2 respectively. To confirm the PCR-RFLP results, we performed Sanger sequencing of five randomly selected cervical cancer as well as control samples. 
All the sequencing reactions were performed on a 3730xl DNA Analyzer (Applied Biosystems, USA) using the BigDye™ Terminator v3.1 kit (Applied Biosystems, USA; Cat# 4337454) as per the manufacturer's instructions. The 10µl sequencing reaction comprised 7.0µl BigDye™ Terminator v3.1 Ready Reaction Mix, 10pmol forward primer and 50ng PCR product. The sequencing results were analyzed on Sequencing Analysis Software version 5.3.1 (Applied Biosystems, USA).\n\nAbbreviations: TLR9, Toll-like receptor 9; FP, Forward Primer; RP, Reverse Primer; PCR, Polymerase Chain Reaction; bp, base pairs\n\nStatistical analysis was performed on GraphPad Prism version 5.00 for Windows (GraphPad Software, USA). The ages of patients and controls were compared using a two-sided Student's t-test. Due to the presence of a single genotype across all the samples, no additional association analysis was performed.\n\n\nResults\n\nThe average age of cervical cancer patients (52.43±11.78 years) and controls (51.8±11.35 years) was comparable without any statistically significant difference (p=0.668). Histopathologic analysis revealed all the cervical cancer cases to be of squamous cell carcinoma type. According to FIGO analysis, 9 (8.2%), 39 (35.5%), 55 (50%) and 7 (6.3%) patients belonged to Stage I, II, III and IV respectively.\n\nPCR amplification revealed the presence of a single intact band of 337 bp (Figure 1; Dataset 118). A single genotype CC (Pro/Pro) was detected across all the sample types (Table 3; Dataset 219), as evident from the presence of 166 bp, 136 bp and 35 bp DNA bands after the RFLP assay (Figure 2; Dataset 320). Sanger sequencing of the randomly selected PCR products corroborated the RFLP results (Figure 3; Dataset 421).\n\nLane M is a 100 bp molecular marker (Takara, Japan; Cat# RR820A), Lane 1 is a negative control and Lanes 2–7 are tumor DNA showing PCR products of 337 bp. 
(Abbreviations: PCR, Polymerase Chain Reaction; TLR9, Toll-like receptor 9; bp, base pair).\n\nLane M is a 100 bp molecular marker, Lane 1 is undigested PCR product and Lanes 2 to 6 show PCR products digested by the BslI enzyme into 166 bp and 136 bp fragments (the 35 bp band is not visible), representing the CC genotype. (Abbreviations: PAGE, Polyacrylamide Gel Electrophoresis; RFLP, Restriction Fragment Length Polymorphism).\n\nSanger sequence electropherogram of (A) a healthy individual and (B) a patient showing a single peak (highlighted) of the C allele of the TLR9 C296T/Pro99Leu SNP, representing the CC genotype. (Abbreviations: SNP, Single Nucleotide Polymorphism).\n\n\nDiscussion\n\nAlthough hrHPV infection is the primary etiological agent of cervical carcinogenesis, the role of host genetic factors, especially those associated with body immunity such as TLRs, cannot be ignored. TLR9 SNPs −1486 T/C and G2848A have been found to be inconsistently associated with cervical cancer risk. In Polish and Mexican populations, both TLR9 −1486 T/C and G2848A polymorphisms were suggested to be risk factors for cervical carcinogenesis12,14. In two independent studies on Chinese populations, a positive association with the TLR9 G2848A SNP was detected22,23, but no involvement of TLR9 −1486 T/C was found23; however, another study suggested −1486 T/C was not a contributory factor to cervical carcinogenesis13. From India, a single report on North Indian patients revealed a marginal role of the TLR9 G2848A polymorphism in cervical cancer risk15.\n\nTo date, no report is available on the rare TLR9 Pro99Leu polymorphism in cancer; this polymorphism has been shown to be associated with DNA ligand hyporesponsiveness in HeLa cell lines16. Considering the fact that cervical cancer is mainly caused by hrHPV infection and the TLR9 Pro99Leu polymorphism is associated with DNA ligand hyporesponsiveness, the present study investigated, for the first time, the role of the TLR9 Pro99Leu polymorphism in cervical cancer susceptibility. 
This is also the first report to study this polymorphism in any of the cancer types globally. Our results revealed the presence of a single genotype CC (Pro/Pro) among cases and controls, demonstrating no significance of the Pro99Leu polymorphism to cervical cancer susceptibility. A complete absence of Pro99Leu in our study population corroborates the report of Lee and colleagues (2006), in which neither controls nor lung tuberculosis and sarcoidosis patients had the TLR9 Pro99Leu polymorphism24. Similarly, the Pro99Leu polymorphism was not detected among healthy Caucasians as well as pneumococcal disease, bacteraemia, and leprosy patients16. Moreover, according to dbSNP, the global MAF of this polymorphism is 0.0002, and our results, albeit on a smaller cohort, support its rare-polymorphism status. Therefore, a direct role of this SNP in cancer, as well as other diseases, seems a remote possibility. Nonetheless, a comprehensive analysis of a larger cohort covering varied ethnic populations globally is suggested to comprehend its role in microbial infection and/or disease susceptibility including cancer.\n\n\nConclusion\n\nThe preliminary data obtained from the present study do not suggest a role for the TLR9 Pro99Leu polymorphism in cervical cancer susceptibility. However, analysis of a larger cohort worldwide may provide more insights into the frequency distribution of the Pro99Leu polymorphism and reveal any influential role in various human diseases including cancer.\n\n\nData availability\n\nDataset 1. Raw, unedited agarose gel images of PCR amplification of the TLR9 gene segment for the C296T/Pro99Leu polymorphism from 50 samples consisting of 26 controls and 24 cervical cancer cases. Figure 1 is a representative picture of the same. 10.5256/f1000research.14840.d20340518\n\nDataset 2. Age, clinical stage and TLR9 genotype status among cervical cancer patients as well as age and TLR9 genotype status among controls. 10.5256/f1000research.14840.d20340619\n\nDataset 3. 
Raw, unedited polyacrylamide gel electrophoresis images of 27 controls and 24 cervical cancer PCR amplified products that underwent restriction fragment length polymorphism (RFLP) analysis. Figure 2 is a representative picture of the same. 10.5256/f1000research.14840.d20340720\n\nDataset 4. Nucleotide sequences spanning TLR9 gene segment for C296T single nucleotide polymorphism, obtained after performing Sanger sequencing on five samples each of cervical cancer and healthy controls. The sequencing results confirm the restriction fragment length polymorphism (RFLP) analysis that represents single genotype CC among all the study participants. Figure 3 A and B are representative electropherograms of the TLR9 C296T CC genotype as evident by the presence of single peak of C allele. 10.5256/f1000research.14840.d20340821\n\n\nEthical considerations\n\nThe research was carried out following due approval from ethics committee of all the participating institutes. Participants were verbally informed and explained about the study, and were provided with an information sheet. Written informed consent was obtained from the participants who agreed to enrol in the present study. Personal information of all the study subjects was kept confidential.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe study was funded by Charotar University of Science and Technology (CHARUSAT).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nAuthors thank Dr. Anjana Chauhan, Gynec Cancer Surgeon, Ex Associate Professor, Gujarat Cancer and Research Institute, Ahmedabad, India for stimulating discussions.\n\n\nReferences\n\nFerlay J, Soerjomataram I , Dikshit R, et al.: Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int J Cancer. 2015; 136(5): E359–386. PubMed Abstract | Publisher Full Text\n\nWalboomers JM, Jacobs MV, Manos MM, et al.: Human papillomavirus is a necessary cause of invasive cervical cancer worldwide. J Pathol. 1999; 189(1): 12–9. PubMed Abstract | Publisher Full Text\n\nChattopadhyay K: A comprehensive review on host genetic susceptibility to human papillomavirus infection and progression to cervical cancer. Indian J Hum Genet. 2011; 17(3): 132–44. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeo PJ, Madeleine MM, Wang S, et al.: Defining the genetic susceptibility to cervical neoplasia-A genome-wide association study. PLoS Genet. 2017; 13(8): e1006866. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumar H, Kawai T, Akira S: Toll-like receptors and innate immunity. Biochem Biophys Res Commun. 2009; 388(4): 621–5. PubMed Abstract | Publisher Full Text\n\nSchröder NW, Schumann RR: Single nucleotide polymorphisms of Toll-like receptors and susceptibility to infectious disease. Lancet Infect Dis. 2005; 5(3): 156–64. PubMed Abstract | Publisher Full Text\n\nMisch EA, Hawn TR: Toll-like receptor polymorphisms and susceptibility to human disease. Clin Sci (Lond). 2008; 114(5): 347–60. 
PubMed Abstract | Publisher Full Text\n\nEl-Omar EM, Ng MT, Hold GL: Polymorphisms in Toll-like receptor genes and risk of cancer. Oncogene. 2008; 27(2): 244–52. PubMed Abstract | Publisher Full Text\n\nGomaz AN: The polymorphisms in Toll-like receptor genes and cancer risk. 2012; 114(4): 461–9. Reference Source\n\nKawai T, Akira S: Toll-like receptor and RIG-I-like receptor signaling. Ann N Y Acad Sci. 2008; 1143: 1–20. PubMed Abstract | Publisher Full Text\n\nHemmi H, Takeuchi O, Kawai T, et al.: A Toll-like receptor recognizes bacterial DNA. Nature. 2001; 408(6813): 740–5. PubMed Abstract | Publisher Full Text\n\nRoszak A, Lianeri M, Sowińska A, et al.: Involvement of Toll-like Receptor 9 polymorphism in cervical cancer development. Mol Biol Rep. 2012; 39(8): 8425–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen X, Wang S, Liu L, et al.: A genetic variant in the promoter region of Toll-like receptor 9 and cervical cancer susceptibility. DNA Cell Biol. 2012; 31(5): 766–71. PubMed Abstract | Publisher Full Text\n\nMartínez-Campos C, Bahena-Román M, Torres-Poveda K, et al.: TLR9 gene polymorphism -1486T/C (rs187084) is associated with uterine cervical neoplasm in Mexican female population. J Cancer Res Clin Oncol. Springer Berlin Heidelberg; 2017; 143(12): 2437–2445. PubMed Abstract | Publisher Full Text\n\nPandey S, Mittal B, Srivastava M, et al.: Evaluation of Toll-like receptors 3 (c.1377C/T) and 9 (G2848A) gene polymorphisms in cervical cancer susceptibility. Mol Biol Rep. 2011; 38(7): 4715–21. PubMed Abstract | Publisher Full Text\n\nKubarenko AV, Ranjan S, Rautanen A, et al.: A naturally occurring variant in human TLR9, P99L, is associated with loss of CpG oligonucleotide responsiveness. J Biol Chem. 2010; 285(47): 36486–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSambrook J, Fritsch E, MT: Molecular Cloning: A Laboratory Manual. 
Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press; 1989.\n\nChauhan A, Pandey N, Raithatha N, et al.: Dataset 1 in: Absence of toll-like receptor 9 Pro99Leu polymorphism in cervical cancer. F1000Research. 2018. Data Source\n\nChauhan A, Pandey N, Raithatha N, et al.: Dataset 2 in: Absence of toll-like receptor 9 Pro99Leu polymorphism in cervical cancer. F1000Research. 2018. Data Source\n\nChauhan A, Pandey N, Raithatha N, et al.: Dataset 3 in: Absence of toll-like receptor 9 Pro99Leu polymorphism in cervical cancer. F1000Research. 2018. Data Source\n\nChauhan A, Pandey N, Raithatha N, et al.: Dataset 4 in: Absence of toll-like receptor 9 Pro99Leu polymorphism in cervical cancer. F1000Research. 2018. Data Source\n\nJin Y, Qiu S, Shao N, et al.: Association of toll-like receptor gene polymorphisms and its interaction with HPV infection in determining the susceptibility of cervical cancer in Chinese Han population. Mamm Genome. Springer US; 2017; 28(5–6): 213–9. PubMed Abstract | Publisher Full Text\n\nLai ZZ, Ni-Zhang, Pan XL: Toll-like receptor 9 (TLR9) gene polymorphisms associated with increased susceptibility of human papillomavirus-16 infection in patients with cervical cancer. J Int Med Res. 2013; 41(4): 1027–36. PubMed Abstract | Publisher Full Text\n\nLee PL, West C, Crain K, et al.: Genetic polymorphisms and susceptibility to lung disease. J Negat Results Biomed. 2006; 5: 5. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "34771",
"date": "25 Jun 2018",
"name": "Gopeshwar Narayan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nWith the global minor allele frequency of 0.0002, the studied sample size is too small. It is interesting to observe that there is only one genotype (homozygous wild type) present in the studied cohort. The power of the study should be mentioned by the authors. The conclusion drawn from the limited data set may not reflect the real situation. I suggest the authors first calculate the number of samples required for the study on the basis of the frequency of minor/major alleles to achieve about 80% power of study and increase the sample size accordingly.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3779",
"date": "29 Jun 2018",
"name": "Alex Chauhan",
"role": "Author Response",
"response": "Dear Dr. Gopeshwar Narayan, We thank you for approving our manuscript and appreciate your valuable suggestions. Please find below our response to your reservation: Reservation: With the global minor allele frequency of 0.0002, the studied sample size is too small. It is interesting to observe that there is only one genotype (homozygous wild type) present in the studied cohort. The power of the study should be mentioned by the authors. The conclusion drawn from the limited data set may not reflect the real situation. I suggest the authors first to calculate the number of samples required for the study on the basis of the frequency of minor/major alleles to achieve about 80% power of study and increase the sample size accordingly. Response: Due to the complete absence of the minor allele, the power of the present study cannot be calculated. However, considering the global minor allele frequency of 0.0002 of the SNP, we calculated the power of the study using an Online Sample Size Estimator, which was found to be 3.6%. To achieve 80% power, approximately 40,000 cases and controls would be required. It is presently not possible for us to collect and analyze such a large sample size. Studies on the Pro99Leu polymorphism with similar sample sizes and results have also been reported by Kubarenko et al., 2010 and Lee et al., 2006, which have been cited in the article."
}
]
},
{
"id": "35718",
"date": "23 Jul 2018",
"name": "Balraj Mittal",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThe basic assumption of genetic epidemiology is that two alleles of a gene are present in the population and one of the alleles (usually the minor one) has an altered frequency in cases and controls. However, if a gene is monomorphic, as is the case here for TLR9 C296T, then it is non-informative and there is no point in looking for its association.\nTherefore, the very basis of the study is wrong, even though it has been carried out in a technically correct manner.\nI therefore feel that this publication will not add anything to the present knowledge.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "34120",
"date": "02 Aug 2018",
"name": "Bhudev C. Das",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nI have gone through the manuscript entitled “Absence of toll-like receptor 9 Pro99Leu polymorphism in cervical cancer” submitted by Alex Chauhan et al. for its publication in F1000Research. The authors have studied the Toll-like receptor 9 (TLR9) Pro99Leu polymorphism in cervical cancer to examine its role in cervical carcinoma in the elimination of viral pathogens through recognition of CpG islands. It is suggested that polymorphism of the TLR9 gene may influence the recognition of the above DNA sequence leading to elimination of infection. The authors have used as many as 110 cervical cancer samples and normal cervical smears from 141 healthy controls and employed PCR-RFLP and sequencing methods for detection of genotype variation. The authors could not find any variation in genotypes of TLR9, hence the authors concluded that the specific polymorphism C296T/Pro99Leu has no role in cervical cancer.\n\nThis is a very clean and straightforward study which could not find any link between TLR9 polymorphism and cervical cancer. The data and the figures presented, including the PCR, PCR-RFLP and sequencing pictures, are excellent and convincing. However, there are a few points which need to be clarified/corrected before the manuscript is accepted and indexed. My comments are as follows:\nThe whole study is based on only one TLR9 gene polymorphism. 
Authors need to very clearly justify their choice of TLR9, and not other TLRs, in the Introduction as well as in the Discussion.\n\nIt is well established that the causative agent for cervical cancer is infection with specific types of high-risk Human papillomaviruses. Any study on cervical cancer demands an obvious correlation/association with the HPV status of the cervical cancer. If at all no HPV analysis has been done, the authors must discuss this in the Discussion.\n\nThere are several English and grammatical errors throughout the manuscript. Authors need to carefully re-read and correct the manuscript. A few obvious errors are indicated here:- i) Methods: 3rd line: reaction and restriction fragment length ii)Conclusion: 1st line – the present study demonstrates no involvement – (delete ‘results’), 3rd line – delete ‘worldwide’ and in 4th line – delete ‘participants’, replace it with ‘subjects iii)Table 1 last column below Visualized on: write 2% Agarose gel.\n\nData set 3: delete ‘unedited’.\nIn summary, the manuscript may be accepted for indexing after the authors have made the minor revisions suggested above.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3913",
"date": "30 Aug 2018",
"name": "Alex Chauhan",
"role": "Author Response",
"response": "Dear Prof. B. C. Das, We thank you for approving our manuscript and appreciate your valuable suggestions. Please find below our responses to your reservations: Reservation 1: The whole study is based on only one TLR9 gene polymorphism. Authors need to very clearly justify their choice of TLR9, and not other TLRs in Introduction as well as in the Discussion. Response: The choice of TLR9 is based on the fact that it recognizes HPV16 DNA, which is a main causative agent of cervical cancer. As suggested, this has been included in the introduction as well as the discussion. Reservation 2: It is well established that the causative agent for cervical cancer is due to infection of specific types of high risk Human papillomaviruses. Any study on cervical cancer demands for an obvious correlation/association with HPV status of the cervical cancer. If at all no HPV analysis has been done, the authors must discuss this in the Discussion. Response 2: We do agree that cervical cancer is mainly caused by the infection of hrHPVs. We found approximately 70% of our cases to be HPV positive, which has been incorporated in the discussion. The details of HPV infection will be published elsewhere. Reservation 3: There are several English and grammatical errors throughout the manuscript. Authors need to carefully re-read the manuscript and correct the manuscript. Few obvious errors are indicated here:- i) Methods: 3rd line: reaction and restriction fragment length ii)Conclusion: 1st line – the present study demonstrates no involvement – (delete ‘results’), 3rd line – delete ‘worldwide’ and in 4th line – delete ‘participants’, replace it with ‘subjects iii)Table 1 last column below Visualized on: write 2% Agarose gel. Response 3: The above-mentioned suggestions have been incorporated. Reservation 4: Data set 3: delete ‘unedited’. Response 4: Deleted."
}
]
}
] | 1
|
https://f1000research.com/articles/7-606
|
https://f1000research.com/articles/7-1361/v1
|
30 Aug 18
|
{
"type": "Data Note",
"title": "Complete plastome sequences of two Psidium species from the Galápagos Islands",
"authors": [
"Bryan Reatini",
"Maria de Lourdes Torres",
"Hugo Valdebenito",
"Todd Vision",
"Maria de Lourdes Torres",
"Hugo Valdebenito",
"Todd Vision"
],
"abstract": "We report the complete plastome sequences of an endemic and an unidentified species from the genus Psidium in the Galápagos Islands (P. galapageium and Psidium sp. respectively).",
"keywords": [
"plastome",
"Psidium",
"Galapagos",
"guayabillo"
],
"content": "Introduction\n\nOver a quarter of all vascular plant species are endemic to islands, making them hotspots of plant diversity and conservation (Kreft et al., 2008). In the Galápagos Islands, there are roughly 560 native species of plants of which approximately 32% are endemic (Lawesson et al., 1987). However, many of these endemic species have remained relatively unstudied since they were originally given scientific descriptions, making the study of the evolutionary histories of these unique taxa difficult. In the present study, we constructed the complete plastome sequences of two species of Psidium (guava) from the Galápagos Islands, one endemic and one currently unidentified in hopes of facilitating future work on the evolutionary relationships of these species.\n\n\nMethods\n\nThis research is authorized under the permit: MAE-DNB-CM-2016-004 in compliance with Ecuadorian regulations.\n\nLeaf samples were collected during May of 2017 from the Galápagos endemic Psidium galapageium Hook (commonly known as guayabillo) on the island of San Cristobal (0.89094°S, 89.43769°W) and from an unidentified Psidium species on the island of Santa Cruz (0.62313°S, 90.38581°W). Based on morphological similarity, the Psidium sp. individual is suspected to be P. acidum (Landrum, 2016), but no reference or barcode sequence from P. acidum is available for confirmation.\n\nLeaf tissue was desiccated immediately after harvesting using silica gel. DNA extractions were performed using a Qiagen DNeasy Plant mini kit (Qiagen, Inc.). Sequence data was generated in the form of paired-end, 150 bp reads using a KAPA library prep kit (Roche Sequencing) and sequenced on an Illumina HiSeq 4000 platform (Illumina, Inc.).\n\nReads were quality and adapter trimmed using Trim Galore! version 0.4.3 with a minimum phred score value of 20 and minimum read length of 50 bp. 
Filtered reads were then aligned to the Psidium guajava plastome reference available at NCBI (Accession: KX364403) using the mem function within BWA version 0.7.15 (Li & Durbin, 2009). Consensus plastome sequences were generated using the mpileup function within samtools version 1.8, followed by the call and consensus functions within bcftools, with a minimum depth of coverage of 10x (Li et al., 2009). Using IRscope (Amiryousefi et al., 2018), the P. galapageium and Psidium sp. plastomes were confirmed to contain large single copies of 88,268 bp and 87,747 bp, respectively, and small single copies of 18,465 bp and 18,490 bp, separated by two inverted repeats of 26,071 bp and 26,360 bp, for total lengths of 158,875 bp and 158,957 bp (Figure 1).\n\nThe circular genomes have been linearized for illustration.\n\nAnnotations were generated using the program Plann (Huang & Cronk, 2015). Of the 132 gene features annotated previously in the Psidium guajava (guava) chloroplast genome on NCBI (Accession: KX364403), all were recovered in the Psidium sp. and P. galapageium plastome sequences. The non-identity of the two taxa sampled is evidenced by the absolute pairwise sequence divergence of the concatenated sequences of three conserved genes (matK, psbA, and rbcL), which have been successfully used as barcodes previously in Psidium (Kress et al., 2009). Sequences were aligned using MUSCLE within MEGA version 7.0.26 (Tamura et al., 2007), and the number of nucleotide differences was counted between these alignments to estimate divergence. A total of 35 differences were observed among 4011 sites (0.87% uncorrected divergence) between P. guajava (Accession: KX364403) and P. galapageium, 45 differences (1.1%) between P. guajava and Psidium sp., and 40 differences (0.99%) between P. galapageium and Psidium sp.\n\n\nData availability\n\nVoucher specimens for P. galapageium and Psidium sp. 
are available at the Charles Darwin Research Station herbarium (Index Herbariorum code CDS) with accession numbers 3053515 and 3053562, respectively. The corresponding plastome sequences for P. galapageium and Psidium sp. are available at NCBI with accession numbers MH491846 and MH491847, respectively.",
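The divergence estimates reported in the Methods (e.g. 35 differences among 4011 aligned barcode sites, or 0.87%) are uncorrected p-distances: mismatches divided by alignment length. A minimal sketch of that counting step, assuming pre-aligned, equal-length sequences (the function name is ours, not from the paper):

```python
def uncorrected_divergence(a, b):
    """Count nucleotide differences between two aligned, equal-length
    sequences; return (differences, percent divergence)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    diffs = sum(1 for x, y in zip(a, b) if x != y)
    return diffs, 100.0 * diffs / len(a)

# Toy example: one mismatch across 8 aligned sites gives 12.5% divergence.
diffs, pct = uncorrected_divergence("ACGTACGT", "ACGTACGA")
```

Applied to 4011 sites with 35 mismatches, this yields the 0.87% figure quoted for the P. guajava vs P. galapageium comparison.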
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by a Louise Coker Fellowship from the University of North Carolina at Chapel Hill (UNC).\n\nAll funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors would like to thank the staff of the Galápagos Science Center, Galápagos National Park, Heinke Jäger and Diana Flores (Charles Darwin Foundation), and the High Throughput Sequencing Facility at UNC for assistance with fieldwork, curation, and sequencing. Special thanks to Marcelo Loyola for the invaluable help during the fieldwork.\n\n\nReferences\n\nAmiryousefi A, Hyvönen J, Poczai P: IRscope: An online program to visualize the junction sites of chloroplast genomes. Bioinformatics. 2018. PubMed Abstract | Publisher Full Text\n\nHuang DI, Cronk QC: Plann: A command-line application for annotating plastome sequences. Appl Plant Sci. 2015; 3(8): pii: apps.1500026. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKreft H, Jetz W, Mutke J, et al.: Global diversity of island floras from a macroecological perspective. Ecol Lett. 2008; 11(2): 116–127. PubMed Abstract | Publisher Full Text\n\nKress WJ, Erickson DL, Jones FA, et al.: Plant DNA barcodes and a community phylogeny of a tropical forest dynamics plot in Panama. Proc Natl Acad Sci U S A. 2009; 106(44): 18621–18626. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLandrum LR: Re-evaluation of Psidium acutangulum (Myrtaceae) and a new combination in Psidium. Brittonia. 2016; 68(4): 409–417. Publisher Full Text\n\nLawesson JE, Adsersen H, Bentley P: An Updated and Annotated Check List of the Vascular Plants of the Galapagos Islands. (Botanical Institute, University of Aarhus). 1987. Reference Source\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics. 
2009; 25(14): 1754–1760. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–2079. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTamura K, Dudley J, Nei M, et al.: MEGA4: Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0. Mol Biol Evol. 2007; 24(18): 1596–1599. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "37720",
"date": "03 Oct 2018",
"name": "Michael O. Dillon",
"expertise": [
"Reviewer Expertise Vascular plant systematist"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper reports the results of sequencing efforts of a species native to the Galapagos Islands. The ability to distinguish new species and especially \"crypto-species\" that are not always obvious due to similar comparative morphologies. The paper should be published and this is the appropriate venue.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
},
{
"id": "39032",
"date": "03 Oct 2018",
"name": "Carolyn Proença",
"expertise": [
"Reviewer Expertise Myrtaceae Systematics",
"Plant Phylogenetics"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis appears to be a sound paper. Methods are modern, standard and referenced with versions cited when appropriate. Sequences are vouchered by herbarium specimens and have been deposited in Genbank. I have checked the accession numbers given and they are correct. Figure 1 is very clear and detailed, and the legend is adequate and complete.\nMy most important comment is to point out that very recently, Landrum (2017, in Canotia 13:1-101) has treated Psidium galapageium Hook.f. as a synonym of widespread P. oligospermum DC. He comments in his paper that “As recognized here Psidium oligospermum is a widespread and variable species” and that “A geographically broad study with molecular techniques of Psidium oligospermum, including related species ... would be valuable.” Reatini’s paper relates directly to this comment by Landrum (2017) and the fact that the independent species status of P. galapageium is not accepted by all should be made. Other studies however (Proença et al. 2014 Flora de Sergipe: Myrtaceae; and Tuler et al. 2017, Flora of Espírito Santo: Psidium (Myrtaceae). Rodriguesia 68:1791-1805) do not include P. galapageium within P. oligospermum.\nA minor quibble is with authorities for scientific names in the paper, that are either erroneous or missing. Psdium galapageium Hook.f., is the correct form (authority erroneously given in the paper as Hook who is actually a different botanist from Joseph Hooker who described the species). Psidium acidum (Mart. ex DC.) 
Landrum (authorities absent in the paper) is how this species should be cited.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Partly\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1361
|
https://f1000research.com/articles/7-111/v1
|
25 Jan 18
|
{
"type": "Opinion Article",
"title": "Real world evidence (RWE) – a disruptive innovation or the quiet evolution of medical evidence generation?",
"authors": [
"Sajan Khosla",
"Robert White",
"Jesús Medina",
"Mario Ouwens",
"Cathy Emmas",
"Tim Koder",
"Gary Male",
"Sandra Leonard",
"Robert White",
"Jesús Medina",
"Mario Ouwens",
"Cathy Emmas",
"Tim Koder",
"Gary Male",
"Sandra Leonard"
],
"abstract": "Stakeholders in healthcare are increasingly turning to real world evidence (RWE) to inform their decisions, alongside evidence from randomized controlled trials. RWE is generated by analysing data gathered from routine clinical practice, and can be used across the product lifecycle, providing insights into areas including disease epidemiology, treatment effectiveness and safety, and health economic value and impact. Recently, the US Food and Drug Administration and the European Medicines Agency have stated their ambition for greater use of RWE to support applications for new indications, and are now consulting with their stakeholders to formalize standards and expected methods for generating RWE. Pharmaceutical companies are responding to the increasing demands for RWE by developing standards and processes for each stage of the evidence generation pathway. Some conventions are already in place for assuring quality, whereas other processes are specific to the research question and data sources available. As evidence generation increasingly becomes a core role of medical affairs divisions in large pharmaceutical companies, standards of rigour will continue to evolve and improve. Senior pharmaceutical leaders can drive this change by making RWE a core element of their corporate strategy, providing top-level direction on how their respective companies should approach RWE for maximum quality. Here, we describe the current and future areas of RWE application within the pharmaceutical industry, necessary access to data to generate RWE, and the challenges in communicating RWE. Supporting and building on viewpoints from industry and publicly funded research, our perspective is that at each stage of RWE generation, quality will be critical to the impact that RWE has on healthcare decision-makers; not only where RWE is an established and evolving tool, but also in new areas that have the potential to disrupt and to improve drug development pathways.",
"keywords": [
"Real world evidence",
"Drug Discovery methods",
"Drug Industry methods"
],
"content": "Introduction\n\nIn March 2016, the US Food and Drug Administration (FDA) released a statement outlining the goals and procedures for the Prescription Drug User Fee Act (PDUFA) VI for 2018–2022, with notice that this would include the use of real world evidence (RWE) in regulatory decision-making. In December 2016, the 21st Century Cures Act became law in the USA, aiming to expedite approval for new medicines. Towards that aim, it included provision for RWE to be used in place of evidence from randomized controlled trials (RCTs), if judged appropriate by the FDA.\n\nRWE is derived from the analysis of data collected from a healthcare setting, outside the context of prescriptive RCTs1. One of the key objectives of RWE is to understand observations and events in patients in routine clinical practice. RWE complements RCTs, which are carefully controlled experiments to test specific hypotheses on the efficacy and safety of new drugs, and which by design do not reflect current clinical practice. Owing to the mechanism of data collection and experimental design, RWE studies generally cannot yield definitive causal inference because of the many confounders of variability.\n\nThe FDA aims to publish draft guidance for the use of RWE by October 2021, and consultation is already underway with healthcare sector stakeholders including the pharmaceutical industry. Both the FDA and the European Medicines Agency (EMA) have stated their wish to see increased use of RWE in supporting indications. In Asia, the growing maturity of real world data sources has led to the recent use of RWE in regulatory discussions, for example, in the decision in Japan on the use of raloxifene for the treatment of osteoperosis2. 
Indian regulatory authorities are also looking to embed routinely collected electronic health records into their decision-making process3.\n\nTo date, use of RWE by the pharmaceutical industry has primarily focused on the peri-launch period just before, and immediately after, marketing approval of a drug, to describe patient populations, to contribute towards knowledge of patient safety and to make judgements on comparative effectiveness between drugs. RWE is also regularly used in economic modelling and when establishing appropriate pricing for new therapeutic interventions. While details of the methods used vary between agencies, RWE is central to the healthcare technology assessments (HTA) by which payers judge if a new drug is cost-effective in their healthcare system4.\n\nEarlier stages of the clinical drug development pipeline are now starting to use RWE to support critical decisions. Experts in RWE from several large pharmaceutical companies have previously published their views on this topic, as have companies that specialize in generating RWE for the pharmaceutical industry, and an overview of results from an unstructured literature search is provided in Table 15–11. Pharmaceutical companies have recognized the demand for RWE from national regulators and other healthcare decision-makers; however, few to date have gained committed support at board level for RWE generation being a critical part of the business. 
With the focus on specialty care, progress in technology and increasing availability of real world data, the time is right to provide this support to ensure that patients can access the medicines they need.\n\nGSK, GlaxoSmithKline; PwC, PricewaterhouseCoopers; RWE, real world evidence.\n\nThis perspectives paper is aimed primarily at an industry audience with an interest in forming or expanding an RWE function, and looks to describe the planning, generation and communication of RWE (Figure 1), specifically:\n\n• the current and future areas of RWE application in the pharmaceutical industry\n\n• the source, quality of and access to data that are necessary to generate RWE\n\n• challenges in communicating RWE.\n\nAn outline of the main steps and some key questions in the planning, generation and communication of RWE. RWE, real world evidence.\n\n\nPotential for RWE in drug lifecycles\n\nThe pharmaceutical industry faces fresh challenges in finding ways to make its innovative medicines available to patients. The interests of healthcare system payers and regulators, and the need to measure disease burden, create a complex environment for quantifying clinical value.\n\nContinual observation of disease epidemiology, treatment patterns and outcomes in the real world can help to prioritize and to streamline medicine development, with the potential for accelerating evidence generation to support label expansion for specific products. All phases of medicine development can benefit from increased observation of the real world (Figure 2 and Table 2).\n\nThe questions that can be addressed by RWE and the functions involved in the generation or use of RWE at each stage of product development. RWE, real world evidence.\n\nRWE, real world evidence.\n\nRWE has the potential to be used early in drug discovery and development programmes, facilitating product development by identifying diseases or indications that represent a significant burden in populations. 
Electronic health records to support differentiation of patients’ needs have been used within the National Institutes of Health (NIH), and the ability to characterize patient populations before conducting a trial has enabled the NIH to design trials that accelerate innovative interventions to testing phase in patient subgroups of particular need12.\n\nTo ensure a clinical trial protocol has internal validity, trial design teams will often use a set of restrictive eligibility criteria that may remove from the trial large segments of a population with the disease of interest. The impact of these eligibility criteria is often not understood or in most cases is not tested until the question of generalizability is raised at the stage of regulatory or reimbursement submission13. This has been recognized as a limitation of RCTs by many regulators, including the FDA, in response to many approved medicines being withdrawn owing to safety problems being identified once a therapy has been exposed to a broad patient population14.\n\nIn order to license a therapy in a new indication or to expand the label into a new population, it is mandatory to establish evidence to support the efficacy claim. Traditionally, explanatory trials determine whether the intervention produces the expected result under controlled circumstances, generated through careful design of RCTs. As the need for larger RCTs increases, owing to low-rate event endpoints, potentially differential efficacy throughout subpopulations of patients and the need to observe larger populations for rare adverse events after intervention, the cost of running the trials increases. The time to run these trials also impacts on the potential profitability of indication expansion. 
Therefore, new thinking is required on how and if explanatory trials can leverage some of the features of real world trials to deliver accelerated efficacy studies.\n\nThe main features of an RCT are the randomization of patients, enrolment into a controlled trial setting and follow-up specified in a study protocol. Applying this concept while also using real world data may provide a hybrid approach to running pragmatic clinical trials. The levels of pragmatism can be understood within the context of the PRagmatic Explanatory Continuum Indicator Summary (PRECIS)-2 framework15. In the regulatory context, a balanced approach of using real world data to execute large-cohort phase 3 trials may generate enough of a reward to risk taking the step towards an innovative execution model. This hybrid approach to running studies has been taken in examples such as the Salford Lung Study (see Box 1)16,17.\n\nThe Salford Lung Study assessed the effectiveness and safety of fluticasone furoate in patients with chronic obstructive pulmonary disease (COPD). In this 12 month, open-label, phase 3, multicentre study, 2799 patients with COPD were randomized 1:1 to a once-daily inhaled combination of fluticasone furoate 100 μg and vilanterol 25 μg, or to continuation of their existing therapy. This collaborative study collected data using electronic health records of consenting patients across all of their interactions with general practitioners (GPs), pharmacists and hospitals. In total, 75 GP practices, 128 community pharmacies in Salford and South Manchester, and two hospitals participated in the study16,17. The primary objective of the Salford Lung Study pragmatic approach was to assess the effectiveness of the treatment, and pragmatic features were not primarily used to decrease costs or to increase speed of delivery. 
Although the cost of the Salford Lung Study has not been published, the expenditure incurred by the training of healthcare professionals and the development of a bespoke data collection system is likely to be high. The cost of such an approach should therefore be carefully evaluated before it is used to implement such a study.\n\nRegulators including the FDA, EMA and China Food and Drug Administration increasingly ask pharmaceutical companies to implement ‘post-marketing commitment’ studies as a condition of approval. In some cases, these commitments are requested after a product launch, for example, in light of new safety concerns. The studies may cover safety, efficacy, effectiveness or optimal use. One specific type of study, a post-authorization safety study, is usual for product authorizations: a large group of patients receiving the new medicine is tracked, often for a longer time period than covered by the registrational trial. Pharmaceutical companies are also obliged to maintain systems for spontaneous safety reporting, capturing and assessing adverse event data received from prescribing physicians. These data are consolidated into reports for regulators and are typically used for pharmacovigilance. Separately, a new medicine is assessed for public reimbursement by each country’s national and local bodies, based on its effectiveness and safety, value for money and affordability. These are the key questions covered by health technology assessments, answered by health economic models that use data from RCTs and RWE studies, plus financial estimates and calculations4.\n\nPhysicians also need to know how best to use new treatments in the broad patient population, not just in the restricted clinical trial sample. 
To give prescribers, guideline committees and formularies confidence to offer the medicines to patients, companies and independent investigators run retrospective and prospective RWE studies, showing outcomes from treatments in their region18.\n\n\nReal world data\n\nData collected in a routine healthcare setting must be stringently curated, validated and standardized to enable the generation of robust RWE1. Primary real world data are generated specifically for the purposes of the research, through prospective collection from diagnostic or monitoring procedures. Secondary real world studies use data that were routinely collected for medical or administrative purposes – such as electronic health (or medical) records and administrative claims databases – for the generation of RWE. More recently, a complementary source of real world data, generated directly by patients, has emerged from the growth of health-focused online communities and research networks. Sources such as the PatientsLikeMe research network, in which patients are encouraged to share health data in a structured, standardized format, give more scope for formal research. While structured patient-generated data sources such as PatientsLikeMe can lead to useful evidence (see Box 2), researchers should be aware of the limitations of data generated outside the healthcare environment, such as the challenge of how to validate the data. Examples of available real world data sources are provided in Table 3, and a detailed overview of their benefits and limitations is provided in the GetReal RWE Navigator.\n\nPatientsLikeMe has an established and engaged community of patients with amyotrophic lateral sclerosis (ALS), a rapidly progressive and fatal neurodegenerative condition with no effective treatments. 
Approximately 9% (348) of patients with ALS in the PatientsLikeMe community reported using lithium carbonate, a drug which had shown promise in a small study (16 treated patients, 28 controls)20, but which did not have regulatory approval. This offered the opportunity to conduct an observational study of drug usage and disease progression from quantitative data recorded by members of the PatientsLikeMe community. The 149 patients who fulfilled inclusion criteria for the study were matched with multiple controls (447 patients in total) based on their prior disease progression. Disease progression was measured using the Revised ALS Functional Rating Scale, which measures patient-reported functional impairment in domains such as speech, swallowing, walking, arm function and respiratory function. No difference in disease progression was observed after 12 months between the overall study group and those patients in the lithium carbonate treatment group (78 patients). Subsequent randomized studies reached the same conclusion that there was no clinical effect in the overall population, although genotype subgroups were associated with variations in response to treatment21. The approach described in this case study has many limitations and cannot be considered a substitute for double-blind RCTs. 
However, it does suggest that data reported by patients in online health communities may be useful for accelerating clinical discovery and evaluating the effectiveness of drugs already in use.\n\naFurther information on data sources is available at: https://rwe-navigator.eu.\n\nbTaking account of privacy and unstructured data considerations.\n\nARO, academic research organization; CPRD, Clinical Practice Research Datalink; CRO, contract research organization; NHS, National Health Service; PCORnet, National Patient-Centered Clinical Research Network.\n\nA scan for data availability and curation for research projects is a necessary step to ensure that the correct choices are made before designing study concepts. This review step is a well-defined process providing knowledge of vendors or research organizations that are able to provide access to data for research purposes. These data can be procured by the pharmaceutical industry and managed and governed within the industry. Data are gathered into an organization by specific RWE functions, which have been formalized by drawing relevant knowledge, processes and people from more established functions such as epidemiology, health economics and observational research, market access/payer divisions, medical affairs, patient safety and health informatics.\n\nAccess to real world data can be categorized into three forms: commercial, research collaborations and developmental collaborations. Each form of data access has implications for budget, time and the research objective. In commercial data access, a data asset is already available and may be able to address a research question; a vendor will therefore allow access to the data asset through commercial contracts. If an established commercial process is not defined, owing to local regulations regarding the commercialization of either a data asset or an academic affiliation, research collaborations can facilitate access to the data. 
This might be the case for access to clinical registry data and to general data in Europe and Asia. Finally, in a developmental collaboration there is a focus on working with a group to develop a data asset that can meet the needs of a research project or the design of a prospective study that will enable the curation of specific data elements from patients in a real world setting.\n\nReal world data must be robust and of high quality to generate valuable evidence that meets the need of healthcare decision-makers. Several guidelines have been developed in recent years to aid investigators in the design and execution of real world studies. Currently, there is still no widely accepted consensus as to which one should be used22. From the perspective of industry sponsors of studies that generate real world evidence, there are many considerations that go beyond scientific, medical or methodological quality in the planning, generation and communication of RWE (Table 4)23–26. Ultimately, the pharmaceutical industry must conduct research that provides the data or evidence that is required, that is acceptable to healthcare decision-makers and that leads to optimal health outcomes for patients while avoiding the misuse of resources.\n\nFAIR, Findability, Accessibility, Interoperability and Reusability; FINER, Feasible to answer, Interesting, Novel, Ethical, Relevant; MOOSE, Meta-analysis Of Observational Studies in Epidemiology; PICO, Patients, Intervention, Comparators, Outcomes; RECORD, REporting of studies Conducted using Observational Routinely-collected health Data; RWE, real world evidence; STROBE, STrengthening the Reporting of OBservational studies in Epidemiology.\n\n\nCommunication of RWE\n\nGenerating robust, high-quality RWE is not sufficient on its own; the pharmaceutical industry must also use this evidence effectively and within frameworks defined by regulators at a country or regional level. 
In 2017, the FDA released draft guidance on proactive communication of healthcare economic information from the pharmaceutical industry to payers; this was a response to the revision of Section 114 of the FDA Modernization Act through the 21st Century Cures Act27,28. Section 114 was written to enable the pharmaceutical industry to communicate healthcare economic information more readily to payers and formulary decision-makers, lowering the threshold required for proactive communication from ‘substantial evidence’ to ‘competent and reliable scientific evidence’. The scope of the legislation does not extend to the proactive communication of clinical comparisons; here, the ‘substantial evidence’ threshold still applies, requiring evidence from RCTs. The proactive use of healthcare economic information permitted through Section 114 does not extend to communication with healthcare professionals or patients. In addition to regulatory limitations, and in contrast to the evidence generated by RCTs, healthcare decision-makers may not be aware of what RWE is or how to interpret it. The pharmaceutical industry may be challenged on the robustness of their RWE, perhaps owing to concerns over a lack of randomization in the study of interest or a perception that bias cannot be addressed in real world studies29. Variations in terminology and a lack of transparency in reporting real world studies add to the challenges in communicating the value of RWE to healthcare decision-makers. Several organizations have established initiatives with the objective of raising awareness of RWE and providing training. The GetReal project, established in Europe by the Innovative Medicines Initiative, brought together representatives from the pharmaceutical industry and other healthcare stakeholders to develop resources and training that provide guidance in the planning, generation and communication of RWE. 
A recent editorial has highlighted how certain challenges in communicating RWE can be overcome30. There is a need for greater transparency in reporting how evidence from a real world study is generated, such as explaining the choice of data source or methodology applied. These efforts will help to ensure that healthcare decision-makers can make informed decisions when assessing RWE alongside evidence from RCTs.\n\n\nConclusions\n\nRWE complements the evidence generated by RCTs and provides healthcare decision-makers with the confidence to choose the right treatment options for patients. Established types of RWE, such as post-marketing safety surveillance, will continue to evolve, adding value to the evidence base for marketed products, and RWE is now embedded and evolving in the reimbursement and regulatory spaces. Beyond this, however, there is an opportunity for positive disruption in pharmaceutical organizations, where decisions and the execution of clinical development, pipeline prioritization and early development can be driven by RWE. This disruption may reduce barriers for drug development, pushing the pharmaceutical industry to become more agile and innovative as it targets increasingly specific patient populations at an unprecedented pace. While companies recognize the need for RWE, greater strategic direction is needed to maximize its impact on health outcomes and commercial success. The challenge for industry is to adapt in order to utilize the full range of RWE appropriately, in an environment of changing technology and regulations. Industry also has a responsibility, together with academic support, to make use of its knowledge in its ambition to drive the evolution of medicine development and to disrupt the way evidence is generated. Strategic coordination among local markets, global organizations and external collaborators will raise data quality standards and build international confidence in the planning, generation and communication of RWE.",
"appendix": "Competing interests\n\n\n\nSajan Khosla, Robert White, Jesús Medina and Sandra Leonard are employees of AstraZeneca, in the Medical Evidence and Observational Research team. Mario Ouwens is an employee of AstraZeneca in the Advanced Analytics team. Cathy Emmas is an employee of AstraZeneca in the Patient Centricity team. Tim Koder and Gary Male are employees of Oxford PharmaGenesis, whose work on this manuscript has been funded by AstraZeneca.\n\n\nGrant information\n\nDevelopment of this manuscript was funded by AstraZeneca; writing was undertaken as part of an internal project by employees of AstraZeneca and Oxford PharmaGenesis.\n\n\nAcknowledgements\n\nWe are grateful for the review contributions of Howard G Hutchinson, James L. Gaskill and Frangiscos Sifakis, who are employees of AstraZeneca. We would also like to thank Claire Stoker and Colin Glen, who are employees of Oxford PharmaGenesis, for their review contributions. Scientific editorial support was provided to the authors by editors at Oxford PharmaGenesis, Oxford, UK, funded by AstraZeneca.\n\n\nReferences\n\nBerger M, Daniel G, Frank K, et al.: A framework for regulatory use of real-world evidence. Duke Margolis Center for Health Policy White Paper. 2017. (Accessed 26 October 2017). Reference Source\n\nTanaka S, Yamamoto T, Oda E, et al.: Real-world evidence of raloxifene versus alendronate in preventing non-vertebral fractures in Japanese women with osteoporosis: retrospective analysis of a hospital claims database. J Bone Miner Metab. 2018; 36(1): 87–94. PubMed Abstract | Publisher Full Text\n\nDang A, Vallish BN: Real world evidence: An Indian perspective. Perspect Clin Res. 2016; 7(4): 156–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMakady A, Ham RT, de Boer A, et al.: Policies for use of real-world data in health technology assessment (HTA): a comparative study of six HTA agencies. Value Health. 2017; 20(4): 520–32. 
PubMed Abstract | Publisher Full Text\n\nEpstein RS, Sidorov J, Lehner JP, et al.: Integrating scientific and real-world evidence within and beyond the drug development process. J Comp Eff Res. 2012; 1(1 Suppl): 9–13. PubMed Abstract | Publisher Full Text\n\nHughes B, Kessler M: Breaking new ground with RWE: how some pharmacos are poised to realize a $1 billion opportunity. IMS Health White Paper. 2014. (Accessed 9 November 2017). Reference Source\n\nBerger ML, Lipset C, Gutteridge A, et al.: Optimizing the leveraging of real-world data to improve the development and use of medicines. Value Health. 2015; 18(1): 127–30. PubMed Abstract | Publisher Full Text\n\nRonicke V, Ruhl M, Solbach T: Revitalizing pharmaceutical R&D. The value of real world evidence. Strategy&. 2015. (Accessed 9 November 2017). Reference Source\n\nCommunicating comparative effectiveness research and real world evidence with population health decision makers. GSK U.S. Public Policy Position Paper. (Accessed 9 November 2017). Reference Source\n\nGalson S, Simon G: Real-world evidence to guide approval and use of new treatments. National Academy of Medicine, Washington, DC. 2015. (Accessed 9 November 2017). Reference Source\n\nNews, views and insights from leading international RWE experts. Quintiles IMS. AccessPoint. (Accessed 9 November 2017). Reference Source\n\nJohnson KE, Tachibana C, Coronado GD, et al.: A guide to research partnerships for pragmatic clinical trials. BMJ. 2014; 349: g6826. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHe Z, Chandar P, Ryan P, et al.: Simulation-based evaluation of the generalizability index for study traits. AMIA Annu Symp Proc. 2015; 2015: 594–603. PubMed Abstract | Free Full Text\n\nSchmidt AF, Groenwold RH, van Delden JJ, et al.: Justification of exclusion criteria was underreported in a review of cardiovascular trials. J Clin Epidemiol. 2014; 67(6): 635–44. 
PubMed Abstract | Publisher Full Text\n\nLoudon K, Treweek S, Sullivan F, et al.: The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015; 350: h2147. PubMed Abstract | Publisher Full Text\n\nBakerly ND, Woodcock A, New JP, et al.: The Salford Lung Study protocol: a pragmatic, randomised phase III real-world effectiveness trial in chronic obstructive pulmonary disease. Respir Res. 2015; 16(1): 101. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVestbo J, Leather D, Diar Bakerly N, et al.: Effectiveness of fluticasone furoate-vilanterol for COPD in clinical practice. N Engl J Med. 2016; 375(13): 1253–1260. PubMed Abstract | Publisher Full Text\n\nJanson C, Larsson K, Lisspers KH, et al.: Pneumonia and pneumonia related mortality in patients with COPD treated with fixed combinations of inhaled corticosteroid and long acting β2 agonist: observational matched cohort study (PATHOS). BMJ. 2013; 346: f3306. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWicks P, Vaughan TE, Massagli MP, et al.: Accelerated clinical discovery using self-reported patient data collected online and a patient-matching algorithm. Nat Biotechnol. 2011; 29(5): 411–14. PubMed Abstract | Publisher Full Text\n\nFornai F, Longone P, Cafaro L, et al.: Lithium delays progression of amyotrophic lateral sclerosis. Proc Natl Acad Sci U S A. 2008; 105(6): 2052–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Eijk RPA, Jones AR, Sproviero W, et al.: Meta-analysis of pharmacogenetic interactions in amyotrophic lateral sclerosis clinical trials. Neurology. 2017; 89(18): 1915–1922. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorton SC, Costlow MR, Graff JS, et al.: Standards and guidelines for observational studies: quality is in the eye of the beholder. J Clin Epidemiol. 2016; 71: 3–10. 
PubMed Abstract | Publisher Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvon Elm E, Altman DG, Egger M, et al.: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. J Clin Epidemiol. 2008; 61(4): 344–9. PubMed Abstract | Publisher Full Text\n\nStroup DF, Berlin JA, Morton SC, et al.: Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000; 283(15): 2008–12. PubMed Abstract | Publisher Full Text\n\nBenchimol EI, Smeeth L, Guttmann A, et al.: The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement. PLoS Med. 2015; 12(10): e1001885. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerfetto EM, Burke L, Oehrlein EM, et al.: FDAMA Section 114: Why the Renewed Interest? J Manag Care Spec Pharm. 2015; 21(5): 368–74. PubMed Abstract | Publisher Full Text\n\nDrug and device manufacturer communications with payors, formulary committees, and similar entities – questions and answers guidance for industry and review staff draft guidance. U.S. Department of Health and Human Services Food and Drug Administration. (Accessed 9 November 2017). Reference Source\n\nWhite R, Carter G, Willet J: RWE: A brave new world for the medical publications professional. The map newsletter (International Society for Medical Publication Professionals (ISMPP). 2017. (Accessed 9 November 2017). Reference Source\n\nWhite R: Building trust in real-world evidence and comparative effectiveness research: the need for transparency. J Comp Eff Res. 2017; 6(1): 5–7. 
PubMed Abstract | Publisher Full Text\n\nFarrugia P, Petrisor BA, Farrokhyar F, et al.: Practical tips for surgical research: research questions, hypotheses and objectives. Can J Surg. 2010; 53(4): 278–81. PubMed Abstract | Free Full Text"
}
|
[
{
"id": "30250",
"date": "29 Jan 2018",
"name": "Marc L. Berger",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is a nice summary of the state of play with respect to real world evidence.\nSome minor changes and additions would be worthwhile:\nPage 3: 2nd paragraph, last sentence: \"Owing to the mechanism of data collection and experimental design, RWE studies generally cannot yield definitive causal inference...\" The word cannot should be changed to may not. It is a matter of discussion whether one can impute causality vs causation from observational studies (see Berger et al.1)\n\nPage 5: Discussion of Phase 1-3 Clinical Study Design and Table 2: RWE can inform clinical study design in other ways, including assessing the size of the recruitment pool of patients with different sets of inclusion or exclusion criteria; estimating treatment effect size for sample size estimation; simulating trials in advance; and recruitment of sites.\n\nPage 7: Table 3: Next to Electronic health/medical records, Flatiron Health should be added to the list of data owners/curators. They are currently working with the FDA on data quality issues.\n\nPage 8, Table 4: Next to communication, the authors should cite the various ISPOR publications on the subject, including: - Caro et al.2; - Recommendations from the joint ISPOR‐ISPE Special Task Force on real‐world evidence in health care decision making, co-published in 2 papers in ViH3 and PEpi and Drug Safety4; - Wang et al.5,6.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? 
Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "35542",
"date": "02 Jul 2018",
"name": "Mattias Kyhlstedt",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe article provides a good overview of the topic. I would suggest the following adjustments / comments:\nFollowing this statement “Owing to the mechanism of data collection and experimental design, RWE studies generally cannot yield definitive causal inference because of the many confounders of variability.”\nI think it would be good to reference Eichler et al., “Bridging the Efficacy–Effectiveness Gap”1, on the implications of moving from traditional RCTs to RWE; i.e. one should actively consider the confounders, not only identify them as a potential problem.\n\nTable 1:\nAs co-author of the article “Deriving more value from RWE to ensure timely access of medicines by patients”2, I think this would be an appropriate reference. I would suggest that it adds the following perspective: the structured search to identify relevant real world data sources is an essential step for successful RWE execution.\n\nTable 4:\nIn the planning phase it is essential to understand the confounding factors that may impact the outcome question of interest, in order to take them into account in the study design / evaluation of the RWE study.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? 
Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-111
|
https://f1000research.com/articles/7-1353/v1
|
29 Aug 18
|
{
"type": "Opinion Article",
"title": "Long-term preservation of biomedical research data",
"authors": [
"Vivek Navale",
"Matthew McAuliffe",
"Matthew McAuliffe"
],
"abstract": "Genomics and molecular imaging, along with clinical and translational research, have transformed biomedical science into a data-intensive scientific endeavor. For researchers to benefit from Big Data sets, developing a long-term biomedical digital data preservation strategy is very important. In this opinion article, we discuss specific actions that researchers and institutions can take to make research data a continued resource even after research projects have reached the end of their lifecycle. The actions involve utilizing an Open Archival Information System model comprising six functional entities: Ingest, Access, Data Management, Archival Storage, Administration and Preservation Planning. We believe that involvement of data stewards early in the digital data life-cycle management process can significantly contribute towards long-term preservation of biomedical data. Developing data collection strategies consistent with institutional policies, and encouraging the use of common data elements in clinical research, patient registries and other human subject research, can be advantageous for data sharing and integration purposes. Specifically, data stewards at the onset of a research program should engage with established repositories and curators to develop data sustainability plans for research data. Placing equal importance on the requirements for initial activities (e.g., collection, processing, storage) and on subsequent activities (data analysis, sharing) can improve data quality, provide traceability and support reproducibility. Preparing and tracking data provenance, using common data elements and biomedical ontologies are important for standardizing the data description, making the interpretation and reuse of data easier. The Big Data biomedical community requires a scalable platform that can support the diversity and complexity of data ingest modes (e.g. machine, software or human entry modes). 
Secure virtual workspaces to integrate and manipulate data, with shared software programs (e.g., bioinformatics tools), can facilitate the FAIR (Findable, Accessible, Interoperable and Reusable) use of data for near- and long-term research needs.",
"keywords": [
"Open",
"Archival",
"Information",
"System",
"Biomedical",
"Data",
"Preservation",
"Access"
],
"content": "Introduction\n\nOver the past decade, major advancements in the speed and resolution of acquiring data have resulted in a new paradigm, ‘Big Data.’ The impact of Big Data can be seen in the biomedical field. Billions of DNA sequences and large amounts of data generated from electronic health records (EHRs) are produced each day. Continued improvements in technology will further lower the cost of acquiring data, and by 2025, the amount of genomics data alone will be astronomical in scale1. In addition to large data sets and the large number of data sources, challenges arise from the diversity, complexity and multimodal nature of data generated by researchers, hospitals, and mobile devices around the world. Research programs like the All of Us Research Program envision using Big Data to transform healthcare from case-based studies to large-scale data-driven precision medicine endeavors2.\n\nHarnessing the power of digital data for science and society requires developing management strategies that enable data to be accessible and reusable for immediate and future research needs. With the preponderance of bigger datasets, the volume, variety and magnitude of biomedical data generation significantly exceed existing analytical capabilities. The time lag between data accumulation and thorough analysis will result in more data being passive or inactive for extended time intervals. Finding meaningful associations for data reuse in applications beyond the purpose for which the data were collected will also be a time-intensive endeavor. Therefore, our opinion is that attention should be focused on developing a data preservation strategy that can ensure biomedical data availability for longer-term access and reuse.\n\n\nModel for long-term data preservation\n\nThe challenge of managing vast amounts of data from space missions led to the development of the Open Archival Information System (OAIS) model3. 
The OAIS model, defined as “an archive that consists of an organization of people and systems with responsibility to preserve information and make it available for a designated community”, provides the framework for long-term preservation of data4.\n\nThe functional model (Figure 1) illustrates that during the Ingest process, Submission Information Packages (SIP) are produced. Metadata and descriptive information are important for developing Archival Information Packages (AIP) for data storage. Metadata can include attributes that establish data provenance, authenticity, accuracy, and access rights. The Dissemination Information Packages (DIP) are produced in response to queries from consumers. The OAIS model includes six functions, represented pictorially in Figure 1: Ingest, Access, Data Management, Archival Storage, Administration and Preservation Planning.\n\nInformation flow within the OAIS model is by means of “packages” (SIP, AIP and DIP), with the related interfaces (both solid and dotted lines) that show the interaction between the various functions5. Various OAIS implementations have led to the development of digital repository systems (e.g. DSpace, Fedora) and customized repositories (e.g. at the US National Oceanic and Atmospheric Administration). Reproduced with permission from The Consultative Committee for Space Data Systems (https://public.ccsds.org/pubs/650x0m2.pdf). The source for this OAIS implementation was originally provided by Ball (2006) (http://www.ukoln.ac.uk/projects/grand-challenge/papers/oaisBriefing.pdf)6.\n\nThe wide variety of examples illustrates that the OAIS model is content and technology agnostic. Therefore, we posit that the model can be used for developing a biomedical digital data preservation strategy. 
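The package flow described above (SIP produced at Ingest, AIP held in Archival Storage, DIP returned at Access) can be sketched as a minimal data model. This is an illustrative sketch only: the class and field names are our own, not part of the OAIS specification, which defines concepts rather than a concrete schema.

```python
from dataclasses import dataclass

# Illustrative model of the OAIS package flow (SIP -> AIP -> DIP).
# Class and field names are examples, not an OAIS-defined schema.

@dataclass
class SIP:                      # Submission Information Package (Ingest)
    content: bytes
    descriptive_metadata: dict  # e.g. provenance, access rights

@dataclass
class AIP:                      # Archival Information Package (storage)
    content: bytes
    preservation_metadata: dict

@dataclass
class DIP:                      # Dissemination Information Package (Access)
    content: bytes
    query: str

def ingest(sip: SIP) -> AIP:
    """Ingest: derive an AIP from a submitted SIP for archival storage."""
    metadata = dict(sip.descriptive_metadata)
    metadata["package_type"] = "AIP"
    return AIP(sip.content, metadata)

def access(aip: AIP, query: str) -> DIP:
    """Access: produce a DIP in response to a consumer query."""
    return DIP(aip.content, query)

sip = SIP(b"sequence reads", {"provenance": "lab A"})
dip = access(ingest(sip), query="disease=ALS")
```

The point of the sketch is only that the same content is re-packaged with different metadata at each stage; a real archive would add validation, fixity checks and storage on top of this flow.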
In the following sections, we contextualize the functional aspects of the model needed for successful implementation of biomedical data repository ecosystems.\n\n\nPreservation planning\n\nAs shown in Figure 1, preservation planning is an important bridge between the data producers and consumers. During the planning stage several questions (some of which are listed below) must be addressed:\n\n• How will data be collected and managed?\n\n• What data (and metadata) are required for establishing provenance?\n\n• What types of common data elements and bio-ontologies are needed?\n\n• How will data curation be carried out for the data sets?\n\n• Which data types will be stored and preserved?\n\n• How will data access be provided?\n\n• What methods are needed to maintain data quality?\n\nIn the past, these questions have been the responsibility of biomedical data custodians and curators working in libraries, archives and repositories, who are usually engaged during the latter part of data lifecycle management (during data preservation and access services). We think that importance should be placed on data preservation during the planning of initial activities (e.g. collection, processing, storage), along with the ensuing activities (data analysis, sharing and reuse). In our opinion, developing a community of data stewards for biomedical research programs within institutions is an important step towards long-term preservation of biomedical data.\n\nConsidering the interdisciplinary skill set needed for data stewards, we propose that institutions leverage the expertise of staff (e.g. biologists, physicians, informaticians, technologists, library science specialists, etc.) for their respective biomedical research programs. We envision data steward teams being engaged early in research data lifecycle management, developing digital data stewardship plan(s) for biomedical data sets. 
These activities can promote a culture of semantic scientists for biomedical programs, which can help reduce the time and cost of data interpretation by biocurators7.\n\nWe think that establishing data stewards’ responsibility within the biomedical research program can improve data quality, provide traceability, and support reproducibility.\n\nTypically, a designated community for biomedical data consists of researchers of a sub-discipline for a disease (e.g. cancer). Reviewing the research sponsors’ requirements, understanding the volume and types of data to be collected, and defining how the data will be organized and managed can all promote the reuse of data8.\n\nGoodman et al. provide a short guide to consider when caring for scientific data. The guide highlights the use of permanent identifiers, depositing data in established repository archives and publishing code and workflows that can facilitate data use/reuse9.\n\nIt is also essential that research group leaders and institutions emphasize data management best practice principles10. An important practice for ensuring good research management in laboratories includes selecting the right medium (paper-based and/or electronic) for laboratory notebooks11.\n\n\nAdministration\n\nBoth producers and consumers of data will be best served by implementing established procedures for digital preservation. 
Producers of biomedical data should develop a comprehensive data management plan (DMP) that addresses the policies and practices needed to acquire, control, protect and deliver data, and the steps needed for the preservation and reuse of data12.\n\nAs a first step, data stewards should establish a DMP to identify the types of data that will be collected, provide information on the organization of data, assign roles and responsibilities for description of the data, and document processes and procedures for Ingest and methods for data preservation and dissemination.\n\nData collection strategies need to be established in the context of institutional policies for biomedical archives. We recommend that the DMP be used as a planning tool to communicate all operations performed on data, and details of the software used to manage data. Williams et al. provide a comprehensive review of data management plans, their use in various fields of biomedical research, and reference material for data managers13.\n\nAs part of administration, data stewards should engage with a designated community (data creators, funding agencies, stakeholders, records managers, archivists, information technology specialists) to appraise the data and determine whether all the data produced during the research program should be preserved, or whether different data types (raw, processed, etc.) require different degrees of preservation (e.g. temporary with a time stamp for review, or permanent indefinitely).\n\nEstablishing data provenance should be part of the data collection and management strategy. This may not always be easy, because contextual information (metadata) about experimental data (wet/dry lab) and workflows is often captured informally in multiple locations, and details of the experimental process are not extensively discussed in publications. 
Contacting the original source for additional information may or may not yield fruitful results, and the reproducibility of experiments becomes challenging in many cases.\n\nSecurity controls should also be part of the data collection and management strategy. For initial security controls assessment, guidance documents (FISMA, NIST SP 800-53 and FIPS) can provide tools for organizational risk assessment and validation purposes14,15. A wide range of issues involving ethical, legal and technical boundaries influence biomedical data privacy, which can be specialized for the type of data being processed and supported16. Important points to consider are confidentiality, disclosure specifications, data rights ownership, and eligibility criteria to deposit data to an established repository.\n\n\nIngest\n\nCapturing relevant data from the experiment in real time can be one of the better practices for establishing biomedical data provenance. Automated metadata capture when possible (using a laboratory information management system), and digitization where automation is not possible, can reduce errors, minimize additional work and ensure data and metadata integrity17. We believe that establishing data provenance will result in successful preparation of SIP during Ingest (Figure 1).\n\nSIP for clinical research, patient registries and other human subject research can be developed by use of common data elements (CDEs). A CDE is defined as a fixed representation of a variable to be collected within a clinical domain. It consists of a precisely defined question with a specified format or a set of permissible values for responses that can be interpreted unambiguously in human and machine-computable terms. There are many examples of CDE usage and information on CDE collections, repositories, tools and resources available from the National Institutes of Health (NIH) CDE Resource Portal18. 
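As a concrete illustration of that definition, a CDE's fixed question and set of permissible values can be validated by machine. The element below is hypothetical, not drawn from the NIH CDE Resource Portal.

```python
# Hypothetical common data element (CDE): a precisely defined question with
# a fixed set of permissible values, checked unambiguously by machine.
smoking_status_cde = {
    "question": "What is the patient's current smoking status?",
    "variable": "smoking_status",
    "permissible_values": {"never", "former", "current", "unknown"},
}

def validate_response(cde: dict, response: str) -> str:
    """Normalize a response and reject values outside the permissible set."""
    value = response.strip().lower()
    if value not in cde["permissible_values"]:
        raise ValueError(f"{cde['variable']}: {response!r} is not permissible")
    return value

validate_response(smoking_status_cde, "Former")   # accepted as "former"
```

Because every registry using this element stores one of the same four values, records from different sources can be pooled without per-site recoding, which is the integration benefit CDEs are meant to provide.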
The advantage of using CDEs was highlighted by the Global Rare Disease Repository (GRDR), where researchers integrated de-identified patient clinical data from rare disease registries and other data sources to conduct biomedical studies and clinical trials within and across diseases19.\n\nOntologies are useful for annotating and standardizing the data description so that the querying and interpretation of data can be facilitated. Selecting a bio-ontology requires knowledge about the specific domain, including the current understanding of biological systems. Several ontologies have been reported for various biological data and can be selected for research data20. An online collaborative tool (e.g. OntoBrowser) can be used to map reported terms to a preferred ontology (code list), which can be useful for data integration purposes21. We believe that the use of CDEs and ontologies can result in developing AIP for long-term preservation of biomedical data.\n\n\nData management\n\nData authenticity, accuracy and reliability influence data quality. For that purpose, controls need to be established from the very beginning of research (as part of the DMP). For experimental work, instrument calibration and validation of data analysis methods contribute significantly to the quality of data produced in a lab. Currently, many approaches for data quality assessment exist and their strengths and weaknesses have been discussed22. The most common approach for obtaining a first look at the quality of new data is by reviewing supporting data provided with research articles that contextualizes data to support the research goals and conclusions. Additional quality assessment is obtained by the evaluation provided by data producers and data curators and, when appropriate, with automated processes and algorithms.\n\nIn the context of Electronic Health Records (EHR), the five dimensions of data quality are: completeness, correctness, concordance, plausibility and currency. 
The data quality assessment of these dimensions has been carried out using one or more of seven categories: comparison with gold standards, data element agreement, data source agreement, distribution comparison, validity checks, log review, and element presence23. Validated and systematic methods for EHR data assessment are important, and with shared best practices, the reuse of EHR data for clinical research can be promoted.\n\nWe think data curation needs should be assessed during DMP planning. One of the ways to assess data curation is by using a Data Curation Maturity model24. The model assumes that new areas of research (evolving areas) may not have best practice(s) from the very beginning, but having an indicator to show maturity levels at different stages of an organization or group in performing tasks helps improve curation practices. A staging approach is proposed to aid in developing good practices (and even best practices) and to identify ineffective practices for various tasks, so that the quality of data can be improved from the beginning. The maturity model can be useful for determining the steps that are needed to improve data quality.\n\nWe opine that data stewards should engage with established repositories and develop data sustainability plans. Many well-known biomedical repositories host a wide range of biomedical data (for example, GenBank for nucleotide sequences, Gene Expression Omnibus for microarray and high-throughput gene expression data, miRBase for annotated sequences, dbSNP for single nucleotide polymorphisms (SNPs), Protein Data Bank for 3D structure data for macromolecules (proteins and nucleic acids), and RefSeq for non-redundant DNA, RNA and protein sequences). Additionally, disease-specific repositories for traumatic brain injury and Parkinson’s disease are also available25.\n\nA tabulated listing of 21 established life science repositories with various types of user support services (e.g. 
for visualization, data search, analysis, deposition, downloads, and online help) is also available26. An additional helpful resource, re3data.org: the Registry of Research Data Repositories, can be used to identify appropriate repositories for storage and search of research data27.\n\n\nArchival storage\n\nBoth raw and processed data are produced during biomedical research. Therefore, developing a storage roadmap is important and should consider data types, volume, data format and the applications required for current and future processing. Broadly, file, object and block are the three types of data storage options available to biomedical researchers28.\n\nFile Storage has been used for storing large and smaller scale biomedical datasets, providing direct and rapid access to local computing resources, including High Performance Computing clusters. Object Storage is ideal for systems that need to grow and scale for capacity. Block Storage is useful when the software application needs to tightly control the structure of the data, usually the case with databases.\n\nDepending on access needs, a tiered data storage strategy can be used for migrating data from high input/output (I/O) disks to lower I/O media, like magnetic tapes. A data storage strategy should consider at least two types of media (disks and tapes) to mitigate the probability of data loss due to media failure. In addition, primary and backup copies of data should be stored at two different, geographically separated locations (at least several hundred miles apart).\n\nConsidering the diversity, complexity and increasing volume of biomedical research data, we posit that cloud-based platforms can be leveraged to support a variety of ingest modes (e.g. machine, software or human entry modes) to make data findable, accessible, interoperable and reusable (FAIR)29. 
In our opinion, a cloud-based data archive platform (shown in Figure 2) can provide a dynamic environment for managing the research data life cycle, along with capabilities for long-term preservation of biomedical data30.\n\nFigure adapted from Navale and Bourne (2018)30.\n\n\nAccess\n\nAccess needs in biomedical research can vary from simple queries (as shown in Figure 1) to the wide range of capabilities (workflows and software tools) usually employed for the analysis of large-scale data sets such as genomics data31. A Dissemination Information Package (DIP) can serve discovery search engines, e.g. DataMed32, and machine-readable methods (e.g. repositive.io) for extracting new knowledge from the datasets33, with resources available online for digital sharing purposes34.\n\nBroadly speaking, access to data and metadata can be discussed in terms of the web and the Application Programming Interface (API). In the web mode (Figure 2), the user utilizes an interactive browser that presents overviews, summaries, and familiar search capabilities. In the API mode, the same underlying data and metadata can be consumed by a computer. The API mode is composed of a set of protocols and instructions that can serve the needs of both software developers and users. APIs commonly use Representational State Transfer (REST)35. REST utilizes the standard HTTP protocol to access and manipulate data or metadata, and standards and toolsets for developing, documenting, and maintaining REST-based APIs are available36. In our opinion, the type of API adopted will be driven by research questions and user community needs, as is evident from a comparison of three genomics APIs (Google Genomics, SMART Genomics, and 23andMe)37,38.\n\nFor longer-term access needs, using file formats that have a good chance of being understood in the future is one way of overcoming technology obsolescence.
File formats characterized by "openness" and "portability", and that maintain "quality", are better choices for long-term preservation needs.\n\nInformation on the data types (e.g. text, image, video, audio, numerical), structure and format is essential for ensuring that data can be used and reused over time. Access to data will be greatly enhanced if data are archived in "open formats" not restricted by proprietary software, hardware, and/or the purchase of a commercial license. Some examples of open data formats in use are: comma- or tab-separated values (csv or tsv) for tabular data, hierarchical data format and NetCDF for structured scientific data, portable network graphics for images, Open Geospatial Consortium formats for geospatial data, and extensible markup language for documents39. If proprietary formats are used for initial data collection and analysis work, the data should be exported to an open format for archival purposes. In some cases, proprietary formats have become de facto standards when popularity and utility have driven tools and algorithms purpose-built to ingest and modify those formats (e.g., Affymetrix .CEL and .CDF formats).\n\nWe also think that the reuse of preserved data can be enhanced by the open availability of client software to user communities. One example is Bioconductor for genomic data40. In addition, developing and applying ontology-driven transformation and integration processes can result in open biomedical repositories in semantic web formats41.\n\n\nConclusion\n\nValuing, protecting, enabling access to, and preserving data resources for the current and future needs of researchers, laboratories, institutes and citizens is a critical step in maturing the biomedical research process of any organization or community.\n\nWith the advent of Big Data, biomedical researchers need to become more proficient in understanding and managing research data throughout its lifecycle.
Establishing the responsibilities of data stewards within the biomedical research program can improve data quality, provide traceability and support reproducibility. Determining specifically what to preserve and for how long are policy decisions that require data steward teams to engage with funding agencies, designated communities and established repositories.\n\nWe opine that the likelihood of maintaining the authenticity, accuracy and reliability of biomedical data for longer-term access will be enhanced by application of the OAIS model. Implementation of the model for biomedical data sets will provide renewed opportunities for data integration, analysis and discovery for basic, translational and clinical research domains.\n\n\nData availability\n\nNo data are associated with this article.",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors thank Mr. Denis von Kaeppler and Mr. William Gandler, Center for Information Technology, and Dr. Sean Davis, Center for Cancer Research at the National Cancer Institute, National Institutes of Health for discussions and suggestions during the preparation of the manuscript. The opinions expressed in the paper are those of the authors and do not necessarily reflect the opinions of the National Institutes of Health.\n\n\nReferences\n\nStephens ZD, Lee SY, Faghri F, et al.: Big Data: Astronomical or Genomical? PLoS Biol. 2015; 13(7): e1002195. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCollins FS, Varmus H: A new initiative on precision medicine. N Engl J Med. 2015; 372(9): 793–795. PubMed Abstract | Publisher Full Text | Free Full Text\n\nISO 14721:2012 - Space data and information transfer systems -- Open archival information system (OAIS) -- Reference model. 2018; [cited 1 Aug 2018]. Reference Source\n\nWikipedia contributors: Open Archival Information System. In: Wikipedia, The Free Encyclopedia. 2018; [cited 1 Aug 2018]. Reference Source\n\nStandard ISO: 14721: 2003: Space Data and Information Transfer Systems - Open Archival Information System Reference Model. International Organization for Standardization. 2003. Reference Source\n\nBall A: Briefing Paper: The OAIS Reference Model. UKOLN: University of Bath. 2006. Reference Source\n\nHaendel MA, Vasilevsky NA, Wirz JA: Dealing with data: a case study on information and data management literacy. PLoS Biol. 2012; 10(5): e1001339. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMichener WK: Ten Simple Rules for Creating a Good Data Management Plan. PLoS Comput Biol. 2015; 11(10): e1004525. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoodman A, Pepe A, Blocker AW, et al.: Ten simple rules for the care and feeding of scientific data. 
PLoS Comput Biol. 2014; 10(4): e1003542. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchreier AA, Wilson K, Resnik D: Academic research record-keeping: best practices for individuals, group leaders, and institutions. Acad Med. 2006; 81(1): 42–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchnell S: Ten Simple Rules for a Computational Biologist’s Laboratory Notebook. PLoS Comput Biol. 2015; 11(9): e1004385. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNational Library of Medicine, National Institutes of Health: Key Elements to Consider in Preparing a Data Sharing Plan Under NIH Extramural Support. U.S. National Library of Medicine; 26 Jun 2012, updated 3 Jan 2013 [cited 19 Jun 2017]. Reference Source\n\nWilliams M, Bagwell J, Nahm Zozus M: Data management plans: the missing perspective. J Biomed Inform. 2017; 71: 130–142. PubMed Abstract | Publisher Full Text\n\nNational Institute of Standards and Technology: FIPS 200, Minimum Security Requirements for Federal Information and Information Systems. CSRC, 2006; [cited 7 Feb 2018]. Reference Source\n\nO’Reilly PD: Federal Information Security Management Act (FISMA) Implementation Project. Created June 12, 2009; updated March 19, 2018. Reference Source\n\nMalin BA, Emam KE, O'Keefe CM: Biomedical data privacy: problems, perspectives, and recent advances. J Am Med Inform Assoc. 2013; 20(1): 2–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKazic T: Ten Simple Rules for Experiments' Provenance. PLoS Comput Biol. 2015; 11(10): e1004384. PubMed Abstract | Publisher Full Text | Free Full Text\n\nU.S. National Library of Medicine: first published 18 June, 2012; updated 29 March 2016. Reference Source\n\nRubinstein YR, McInnes P: NIH/NCATS/GRDR® Common Data Elements: A leading force for standardized data collection. Contemp Clin Trials. 2015; 42: 78–80.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalone J, Stevens R, Jupp S, et al.: Ten Simple Rules for Selecting a Bio-ontology. PLoS Comput Biol. 2016; 12(2): e1004743. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRavagli C, Pognan F, Marc P: OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts. Bioinformatics. 2017; 33(1): 148–149. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeonelli S: Global Data Quality Assessment and the Situated Nature of “Best” Research Practices in Biology. Data Science Journal. 2017; 16: 32. Publisher Full Text\n\nWeiskopf NG, Weng C: Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc. 2013; 20(1): 144–151. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlqasab M, Embury SM, Sampaio S: A Maturity Model for Biomedical Data Curation. Reference Source\n\nNavale V, Ji M, McCreedy E, et al.: Standardized Informatics Computing platform for Advancing Biomedical Discovery through data sharing. bioRxiv. 2018. Publisher Full Text\n\nKirlew PW: Life Science Data Repositories in the Publications of Scientists and Librarians. [cited 31 Oct 2017]. Publisher Full Text\n\nPampel H, Vierkant P, Scholze F, et al.: Making research data repositories visible: the re3data.org Registry. PLoS One. 2013; 8(11): e78080. PubMed Abstract | Publisher Full Text | Free Full Text\n\nData Storage Best Practices. In: Fred Hutch Biomedical Data Science Wiki. [cited 22 Jul 2018]. Reference Source\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. Nature Publishing Group; 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNavale V, Bourne PE: Cloud computing applications for biomedical science: A perspective. PLoS Comput Biol. 2018; 14(6): e1006144. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu B, Madduri RK, Sotomayor B, et al.: Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses. J Biomed Inform. 2014; 49: 119–133. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOhno-Machado L, Sansone SA, Alter G, et al.: Finding useful data across multiple biomedical data repositories using DataMed. Nat Genet. 2017; 49(6): 816–819. PubMed Abstract | Publisher Full Text\n\nCorpas M, Kovalevskaya NV, McMurray A, et al.: A FAIR guide for data providers to maximise sharing of human genomic data. PLoS Comput Biol. 2018; 14(3): e1005873. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJagodnik KM, Koplev S, Jenkins SL, et al.: Developing a framework for digital objects in the Big Data to Knowledge (BD2K) commons: Report from the Commons Framework Pilots workshop. J Biomed Inform. 2017; 71: 49–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFielding RT: Architectural Styles and the Design of Network-based Software Architectures. Taylor RN, Dissertation Committee Chair. PhD, University of California, Irvine. 2000. Reference Source\n\nThe Linux Foundation and Open API Initiative: Open API Initiative. In: Open API Initiative. [cited 8 Aug 2017]. Reference Source\n\nSwaminathan R, Huang Y, Moosavinasab S, et al.: A Review on Genomics APIs. Comput Struct Biotechnol J. 2015; 14: 8–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheemalapati S, Chang YA, Daya S, et al.: Hybrid Cloud Data and API Integration: Integrate Your Enterprise and Cloud with Bluemix Integration Services. IBM Redbooks; 2016. Reference Source\n\nHart EM, Barmby P, LeBauer D, et al.: Ten Simple Rules for Digital Data Storage. PLoS Comput Biol. 2016; 12(10): e1005097. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDavis S, Meltzer PS: GEOquery: a bridge between the Gene Expression Omnibus (GEO) and BioConductor. Bioinformatics. 
2007; 23(14): 1846–1847. PubMed Abstract | Publisher Full Text\n\nCarmen Legaz-García MD, Miñarro-Giménez JA, Menárguez-Tortosa M, et al.: Generation of open biomedical datasets through ontology-driven transformation and integration processes. J Biomed Semantics. 2016; 7: 32. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "37761",
"date": "13 Sep 2018",
"name": "Jane Greenberg",
"expertise": [
"Reviewer Expertise Metadata",
"Ontologies",
"Semantics",
"Linked data",
"Data management",
"Economics of metadata",
"Big data"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a timely scientific undertaking and stands as an important, original contribution to the body of research on data preservation in the context of big data, biomedical research, and data management/archiving. Furthermore, the research makes important links to key underpinnings and research covering data quality, modeling, along with metadata, and ontologies.\n\nMuch of the big data research to date has focused on algorithmic work, visualization, and related topics, whereas data preservation and archival research has been driven largely from the context of institutional repositories that are not necessarily storing big data.\n\nThis paper bridges these two research areas, using the OAIS model as a platform to weave together these topics, and provide salient discussion about keys pillars of covering preservation planning, administration, ingest, data management, archival storage, and access. The research is further contextualized by the FAIR principles that seek to ensure that data not only findable and accessible, but also interpretable and reusable. Further, the authors hone in on the role of data modeling, metadata, including the provenance model and ontologies.\n\nThe writing is excellent, the synthesis of the literature and integration of other research is solid. Additionally, the diagrams are illustrative. 
(Note, this is my first review in this system, and I am super excited and pleased to have had the opportunity to serve as a reviewer for this original research, and eager to share with colleagues. In fact, I have already shared this link with colleagues, so they can include this piece as a key reading across related courses as our academic quarter at Drexel is almost underway).\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "38250",
"date": "17 Sep 2018",
"name": "George Alter",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article explaines the importance of engaging data stewards in planning for data preservation at the beginning of data collection. It makes an important contribution by offering useful guidance about each stage of the data life cycle, such as the use of CDEs and ontologies, the Data Curation Maturity model, and open formats. Data would be more plentiful, better documented, and easier to reuse if this planning took place.\n\nThe following suggestions are intended to strengthen and expand the current draft of the paper. Since this is explicitly an \"opinion article,\" some of these comments reflect my own opinions, which the authors may not share.\n1. Under \"Mandatory Responsibilities\" the OAIS standard says that an archive must: \"Ensure that the information to be preserved is Independently Understandable to the Designated Community. In other words, the community should be able to understand the information without needing the assistance of the experts who produced the information.\" I think a statement like this provides a focus for the activities involved in preparing data for prservation. OAIS is not simply about assuring that the data survive. Preservation also assures that the data will be reusable (the 'R' in FAIR) in the future. It might be worthwhile to point out the relevance of OAIS for FAIR earlier in the paper.\n2. 
Although Figure 1 suggests that the SIP arrives at the archive fully formed, the text of the OAIS standard emphasizes that the relationship between the archive and the data producer may involve a lot of negotiation. Data repositories often need to contact the data producer several times to get the information that they need. This is a costly process, and the recommendations in this paper would reduce those costs. This is worth mentioning, because data producers often only see the costs of preparing data for sharing.\n3. The discussion of CDEs and ontologies could also mention the Center for Expanded Data Annotation and Retrieval (CEDAR), which has developed tools for creating metadata.\n4. When choosing a data repository, data stewards should favor repositories that offer an assurance of permanence and trustworthiness. This is especially important in the biomedical community, because valuable data have been lost when repositories and databases closed. There are several bodies that certify data repositories as trustworthy. ISO has a \"Standard for Trusted Digital Repositories\" (ISO 16363), which is very comprehensive and usually involves an external auditor. The CoreTrustSeal has a smaller list of requirements and relies on a self-audit.\n5. Confidentiality and disclosure are mentioned in a paragraph about security controls, but this deserves a little more space. Data producers set the terms of future reuse of data when they make informed consent agreements with subjects. If data sharing is not anticipated in the informed consent agreement, it is very difficult to share the data. Since the informed consent agreement is supervised by an IRB, the IRB should also approve plans for future data sharing. These plans could involve an array of legal (data use agreements) and technical (anonymization, \"data enclaves\") measures to protect confidential information.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? 
Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "37763",
"date": "19 Sep 2018",
"name": "Chaitan Baru",
"expertise": [
"Reviewer Expertise Data science",
"database systems",
"informatics",
"scalable data systems."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn sum, this paper is highlighting the importance and need for \"data stewardship\", viz., well-considered data management plans for every project that produces/generates data. Data stewardship plans should be developed early in a project cycle--indeed, along with the development of the science/research goals of the project itself.\nThe authors caution that this problem becomes even more urgent in the era of \"big data\". They recommend use of an existing approach, viz., the Open Archival Information System (OAIS).\nThe issues mentioned and the approaches suggested are very reasonable, and very much in step with similar concerns and approaches in several other domains, which are all facing the data deluge.\nIn fact, I have heard so much discussion and read so many articles on this topic--supporting the approaches described in this paper as well--that I am now concerned that, as a community, we are probably not taking the right approach to this problem.\nFirst, the only way that the community may pay attention and spend resources on this problem is if they see value. These type of articles should probably begin with the value of doing this work, rather than the cost. Almost all articles on this topic talk in detail about the costs, and simply presume that the value exists. Value could be demonstrated by showing real science examples that benefited from archival data; examples of studies that went into archival data and found something new and interesting. 
Or, conversely, studies that duplicated effort, or failed in other ways, for not digging into archival data.\nSecond, curation is not a static process. The costs of curation, done properly, may actually be far more than what these papers suggest. Since science is not static, the relevance or \"meaning\" of a particular dataset is also not static. Data may become less or more valuable as the field progresses. All of that speaks to what I would call \"continuous curation\" of scientific data, and not just one-time curation at the time of creation.\nFinally, what do we do when everyone is extolling the need and virtue of curation but no one is spending nearly enough resources to do the job right? One would think that this is the classic use case for AI and \"smart\" techniques. Why not let the computer do the job that no human is willing or able to do for the amount of money we are willing to spend? Rather than AI, this may be the classic use case for IA--intelligence augmentation, with a human in the loop.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "37766",
"date": "24 Sep 2018",
"name": "David Giaretta",
"expertise": [
"Reviewer Expertise I am an expert in digital preservation - see www.iso16363.org and www.giaretta.org"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe reason I ticked \"Partly\" in the above checkboxes is that the article has omitted clear discussion of the key concepts in OAIS for preservation, which are also key to the \"R\" in FAIR i.e. Reuse.\nThese may be addressed by including a discussion of the OAIS Information Model. OAIS defined Long Term Preservation as The act of maintaining information, Independently Understandable by a Designated Community, and with evidence to support its Authenticity, over the Long Term.\nTo explain what I mean briefly, to ensure understandability the archive should collect the appropriate Representation Information, and ensure that as the Designated Communities Knowledge Bases, the Representation Information must be supplemented. For example if the ontology, which is used to understand the biomedical data, goes out of use over time, for example the URL for its location no longer works, the archive will need to ensure that the original ontology remains available.\n\nSimilarly, as evidence for Authenticity the archive should collect Provenance about the data, as is briefly mentioned in the article.\nThe Archival Information Package, which is the AIP shown in the Functional Model, is a way to ensure that the archive has captured all the information required for Long Term Preservation, including Representation Information, Provenance Information and several other items. 
In order to create the AIP for a dataset the archive should ensure that the required information is captured during Ingest, and is maintained over time.\nThe data management plan should help to ensure that the appropriate information is captured over the course of the project in order to provide it to the archive.\nOne last point worth mentioning is that the capabilities of an archive can also be evaluated through ISO 16363 audit and certification.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
},
{
"id": "38251",
"date": "09 Oct 2018",
"name": "Ravi Madduri",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article titled \"Long-term preservation of biomedical research data” by Navale et.al, is a timely article that highlights the need for a long term strategy for preservation of data products generated from research projects. Often times a lot of time is spent in the initial activities of a research project which involves data collection, processing, analysis, sharing and publishing but substantially less time is spent in curating the data, making data reusable and finally long term preservation of data products. The paper presents a strategy for long term data preservation which the authors have broken down into multiple stages. This reviewer agrees with the strategy and the overall presentation of the strategy in the manuscript. There is, however, one important challenge in long term data management that this reviewer felt has not been covered adequately which is the economics of long term data preservation. Determining which data products are preserved and for how long is an important part of the puzzle. Additionally, the economics of long term storage along with who the stakeholders are and what the incentives for this to happen is also important. The article would be made better with these additions.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? 
Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1353
|
https://f1000research.com/articles/7-42/v1
|
10 Jan 18
|
{
"type": "Research Article",
"title": "Longitudinal comparison of the humoral immune response and viral load of Porcine Circovirus Type 2 in pigs with different vaccination schemes under field conditions",
"authors": [
"Diana S. Vargas-Bermudez",
"Andrés Díaz",
"José Darío Mogollón",
"Jairo Jaime",
"Andrés Díaz",
"José Darío Mogollón"
],
"abstract": "Background: Porcine Circovirus type 2 (PCV2) infections are distributed worldwide and cause Porcine Circovirus Associated Disease (PCVAD). To minimize the impact of PCV2 infection on swine health and production, different vaccination schemes have been used since 2006. However, the association between vaccination schemes, virus load and disease under field conditions are not completely understood. Therefore, the objective of this study was to compare the effect of two different PCV2 vaccination schemes on the humoral response and PCV2 load in pigs after weaning under field conditions. Methods: Two commercial pig farms (Farm A and B), endemically infected with PCV2, which were using two different PCV2 subunit vaccinations schemes for sow, gilts and piglets, were selected. We designed a longitudinal study and measured IgG levels by ELISA and virus load by quantitative PCR in pigs after weaning. Forty 3-week old piglets were randomly selected at weaning and followed for 20 weeks. IgG levels and virus loads were compared within and between farms and considered statistically different if the non-parametric Kruskal Wallis test p value was lower than 0.05. Results: We found that low virus loads were maintained in pigs from both farms regardless of the vaccination scheme used (p>0.05). However, there was significant difference in the mean IgG levels observed over time (p<0.05), suggesting that different humoral immune response are not necessarily associated with different virus loads observed over time. Conclusions: These results are important because they can help to prevent PCV2 infections using different vaccination schemes to minimize the effect of PCVAD on swine health and production.",
"keywords": [
"Porcine Circovirus type 2 (PCV2)",
"PCV2 vaccines",
"IgG anti PCV2",
"viral loads."
],
"content": "Introduction\n\nPorcine circovirus type 2 (PCV2) belongs to the Circoviridae family. It is a non-enveloped icosahedral virus with a single-stranded circular DNA genome that contains 1766 to 1768 nucleotides (Fenaux et al., 2004; Guo et al., 2010). The PCV2 genome contains four open reading frames (ORFs), namely ORF1, ORF2, ORF3 and ORF4 (Allan et al., 2012; Xiao et al., 2015). ORF1 encodes the Rep and Rep´ proteins required for viral replication, ORF2 encodes the immunogenic capsid protein (Cap) (Fenaux et al., 2004), ORF3 encodes a protein involved in apoptosis (ORF3 protein)(Liu et al., 2005) and ORF4 encodes a protein that affects the activity of CD4+ and CD8+ cells (He et al., 2013). Additionally, the nucleotide diversity of ORF2 sequences allows to differentiate five different PCV2 genotypes denominated PCV2a, PCV2b, PCV2c, PCV2d (formerly known as mutant PCV2b) (Davies et al., 2016; Franzo et al., 2015b; Xiao et al., 2015), and PCV2e (Wang et al., 2009). PCV2a and PCV2b are distributed worldwide, although PCV2b is more prevalent than PCV2a (Opriessnig et al., 2013). Until 2015, PCV2c was only reported in Denmark (Dupont et al., 2008); however, it is now reported in feral pigs in Brazil (Franzo et al., 2015a). Additionally, PCV2d is found in several countries, including China, Brazil, and USA (Franzo et al., 2015a; Guo et al., 2010; Xiao et al., 2015; Zhai et al., 2011). Moreover, the distant PCV2 genotype (PCV2e) is found in China (Wang et al., 2009; Zhai et al., 2011) and the USA (Davies et al., 2016). In Colombia, PCV2 infections have been described since 2002 and have been recently characterized (Rincón Monroy et al., 2014).\n\nSeveral syndromes collectively named Porcine Circovirus Associated Disease (PCVAD) are associated with PCV2 infections, and high PCV2 viral loads have been associated with disease severity (Olvera et al., 2004). 
PCVAD include PCV2-subclinical infection (PCV2-SI), PCV2 systemic disease (PCV2-SD, initially named postweaning multisystemic wasting syndrome (PMWS)), PCV2-reproductive disease (PCV2-RD), porcine dermatitis and nephropathy syndrome (PDNS), respiratory complex and enteritis (Segalés, 2012; Shen et al., 2010). PCV2-SD is considered the most economically significant condition for the swine industry among all PCVAD (Segalés, 2012).\n\nPCVAD prevention is mainly based on vaccination against PCV2 infections (Feng et al., 2014; Fort et al., 2009). PCV2 vaccination is effective in reducing viral load, viral shedding, and PCV2-SD associated lymphoid lesions (Cline et al., 2008; Fachinger et al., 2008; Fort et al., 2008; Park et al., 2014). Vaccination can also induce neutralizing antibodies and IFNɣ secreting cells (IFNɣ SCs), which facilitates viral clearance (Fort et al., 2009; Martelli et al., 2011). Additionally, PCV2 vaccination can minimize the effect of PCV2 infection on swine health by improving average daily weight gain (ADWG) and reducing mortality, especially in the presence of co-infection with other viruses (Fachinger et al., 2008; Horlen et al., 2008; Jacela et al., 2011; Kixmöller et al., 2008; Park et al., 2014).\n\nThere are at least four different types of commercial PCV2 vaccines based on the PCV2a genotype worldwide (Opriessnig et al., 2014; Park et al., 2014) that are effective at reducing the impact of PCV2a and PCV2b infections (Fort et al., 2008). One inactivated vaccine contains whole PCV2 as the antigen, and is recommended for 3-week-old piglets or breeding females (Beach & Meng, 2012; Segalés, 2015). In contrast, the chimeric PCV1-2 vaccine contains the immunogenic capsid gene of PCV2a cloned into the genome backbone of the non-pathogenic PCV1 (Segalés, 2015). Moreover, subunit recombinant vaccines express the capsid protein within a baculovirus system (Shen et al., 2010; Trible & Rowland, 2012) and are recommended for pigs between 2 and 4 weeks of age. 
However, off-label use of the chimeric vaccines in sows and gilts can result in the reduction of viremia and increased ADWG in the offspring (Fraile et al., 2012; Segalés, 2015). Vaccination of sows seeks to reduce viremia and viral loads in piglets through neutralizing antibodies present in colostrum, and could improve the productive performance of their offspring after weaning (Beach & Meng, 2012; Gerber et al., 2011; Pejsak et al., 2010). Moreover, vaccination of the piglet is used to induce active humoral and cellular immunity, reduce viral loads, shorten the duration of viremia, and improve productive performance (Fachinger et al., 2008; Fraile et al., 2012; Lyoo et al., 2011; Takahagi et al., 2010). Currently, it is feasible to vaccinate sows, piglets, or both (Fraile et al., 2012; Opriessnig et al., 2010), although the interference between maternally derived antibodies and active immunity of the piglet is under debate (Fraile et al., 2012).\n\nAlthough it is well known that vaccination reduces the clinical presentation of the disease, limited information is available regarding the effect of different PCV2 vaccination schemes on virus load and humoral immune response over time under field conditions. Therefore, the objective of this study was to compare the effect of two different PCV2 vaccination schemes on the humoral response and PCV2 load in pigs after weaning. Our results indicated that different vaccination schemes against PCV2 induce different humoral immune responses over time without a difference in the observed viral load. These results are important because they can help to prevent PCV2 infections and minimize the effect of PCVAD on swine health and production.\n\n\nMethods\n\nFor this study, two commercial pig farms in Colombia (Farms A and B), endemically infected with PCV2, were conveniently selected. 
While Farm A was a 500-sow farrow-to-finish farm, Farm B was a 250-sow farrow-to-wean farm, with two additional sites for the nursery and finishing stages of production. Farm A vaccinated all sows and gilts (replacement animals for the breeding stock) against PCV2 every six months and all piglets on a weekly basis at 3 weeks of age. In contrast, Farm B vaccinated all gilts at arrival and piglets at 3 and 5 weeks of age on a weekly basis.\n\nForty 3-week-old piglets were randomly selected at weaning in each farm. Each pig was ear tagged and randomly assigned to two treatment groups: non-vaccinated pigs (n=10) and PCV2 vaccinated pigs (n=30). Piglets with different treatments were commingled among other pigs after weaning based on the farmer’s production system. Animal care and procedures at the farms were in accordance with the guidelines of the \"Porcine Animal Welfare\" guide (Pork Colombia, former Colombian Association of Pig Farmers), which is based on the concept of the five freedoms (established by the Farm Animal Welfare Council, 1992, in the United Kingdom). The pens had concrete floors with plastic slatted areas, water troughs with water ad libitum, feeders, and a straw-bedded resting area. Stocking densities were managed according to pig weight, following the recommendations of Council Directive 2008/120/EC. Pigs were injected intramuscularly on the right side of the neck at 3 weeks of age with 1 ml of commercial subunit vaccine A (VAC-A) in Farm A or 2 ml of commercial subunit vaccine B (VAC-B) in Farm B. Additionally, pigs in Farm B were boosted with VAC-B at 5 weeks of age. 
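The allocation described above (per farm, 40 ear-tagged piglets split into 10 non-vaccinated controls and 30 vaccinated pigs) can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' actual procedure; the function name `assign_treatments`, the tag format, and the fixed seed are assumptions added for reproducibility.

```python
import random

def assign_treatments(ear_tags, n_control=10, seed=42):
    """Randomly split ear-tagged piglets into a non-vaccinated control
    group and a vaccinated group (sketch; the paper does not describe
    the exact randomization mechanism used on the farms)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    control = set(rng.sample(ear_tags, n_control))
    return {tag: ("non-vaccinated" if tag in control else "vaccinated")
            for tag in ear_tags}

# Example: 40 piglets on one farm, hypothetical tags A01..A40
tags = [f"A{i:02d}" for i in range(1, 41)]
groups = assign_treatments(tags)
print(sum(1 for g in groups.values() if g == "non-vaccinated"))  # 10
print(sum(1 for g in groups.values() if g == "vaccinated"))      # 30
```

Sampling without replacement (`random.sample`) guarantees the two groups are disjoint and exhaust the 40 selected piglets.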
Individual blood samples (10 ml) were collected by jugular venipuncture at 3, 7, 11, 15, 19 and 23 weeks of age (W3, W7, W11, W15, W19, and W23, respectively).\n\nIgG antibodies against PCV2 were evaluated by ELISA using the INGEZIM Circo IgG1.1® assay (Ingenasa-Spain) at 450 nm on a BioTek® Power Wave XS OD system with a cutoff value of 0.3, according to the manufacturer's instructions.\n\nAdditionally, PCV2 viral loads were estimated over time using quantitative polymerase chain reaction (qPCR) (Olvera et al., 2004) in a Light Cycler® 480 II-Roche thermal cycling system. Briefly, DNA was first extracted from all serum samples collected using the QIAamp DNA kit (QIAGEN®). Then the rep coding region of PCV2 was amplified using primers PCV2-ABF 5'-GCCAGAATTCAACCTTMACYTTYC-3' and PCV2-ABR 5'-GCGGTGGACATGMTGAGATT-3', as previously described (Rincón Monroy et al., 2014). PCR reactions were carried out in 20 μl containing 5 μl of DNA mixed with 15 μl of real-time PCR master mix (Light Cycler® 480 SYBR Green I Master-Roche mix + 1 μM of each primer) at 95°C for 1 minute followed by 40 cycles of 95°C for 1 minute, 61°C for 25 seconds and 72°C for 5 seconds. Additionally, a plasmid (PCR blunt vector plasmid) containing the complete PCV2 genome was used as a positive control (kindly donated by Dr. Carl A. Gagnon, Swine and Poultry Infectious Diseases Research Center – CRIPA, Université de Montréal, St-Hyacinthe, Québec, Canada). Ten-fold dilutions of the plasmid (from 10^9 to 10^1 PCV2 plasmid copies/ml) were used as the standard curve for PCV2 quantification. The cutoff level to diagnose animals as PMWS positive was established at 10^7 PCV2 genomes/ml, according to previous studies (Olvera et al., 2004). Piglets with viral loads lower than 10^7 genomes/ml were considered asymptomatic animals (Olvera et al., 2004). 
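The quantification step above maps a sample's Ct value to genome copies/ml through a standard curve fitted to the ten-fold plasmid dilutions (10^9 down to 10^1 copies/ml), with 10^7 genomes/ml as the PMWS cutoff. The study performed this with the Light Cycler software; the sketch below reimplements the arithmetic in plain Python under idealized assumptions (perfectly linear dilution series at ~100% efficiency; all function names and Ct numbers are illustrative, not from the paper).

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares line Ct = slope * log10(copies) + intercept,
    fitted to the ten-fold plasmid dilution series."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def copies_per_ml(ct, slope, intercept):
    """Invert the standard curve to estimate PCV2 genome copies/ml."""
    return 10 ** ((ct - intercept) / slope)

def classify(ct, slope, intercept, cutoff=1e7):
    """PMWS positive if load >= 10^7 genomes/ml, else asymptomatic."""
    load = copies_per_ml(ct, slope, intercept)
    return "PMWS positive" if load >= cutoff else "asymptomatic"

# Idealized dilution series: 10^9 .. 10^1 copies/ml. The Ct values are
# made up for illustration (3.32 cycles per ten-fold dilution, i.e. a
# hypothetical 100%-efficient reaction).
logs = list(range(9, 0, -1))
cts = [10.0 + 3.32 * (9 - L) for L in logs]
slope, intercept = fit_standard_curve(logs, cts)
# Amplification efficiency follows from the slope: E = 10^(-1/slope) - 1
efficiency = 10 ** (-1 / slope) - 1           # ~1.0, i.e. ~100%
load = copies_per_ml(20.0, slope, intercept)  # roughly 10^6 copies/ml
print(classify(20.0, slope, intercept))       # asymptomatic
```

With such a curve, every additional ~3.32 cycles corresponds to one ten-fold drop in template, which is why a sample crossing at Ct 20 here sits about 10^6 copies/ml, well below the 10^7 PMWS threshold.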
Data analysis was done using the corresponding software (Light Cycler® 480 II-Roche).\n\nMean IgG and PCV2 copies/ml were compared within and between VAC groups and considered statistically different if the non-parametric Kruskal-Wallis test p value was lower than 0.05. Additionally, the linear association between ELISA titers and the viral load was estimated at each sampling event and considered statistically significant if the null hypothesis of a slope equal to 0 was rejected. The software used was R version 3.4.1.\n\n\nResults\n\nAll piglets had IgG antibodies against PCV2 at weaning and there was no statistical difference between treatment groups within farms before vaccination (Table 1). However, at 3 weeks of age the anti-PCV2 IgG levels were higher in piglets from Farm A (VAC-A) than in piglets from Farm B (VAC-B) (p<0.05). The anti-PCV2 IgG response after vaccination was different between farms. In Farm A, IgG levels were high at 3 weeks of age and then decreased over time, without a significant difference in the average level of anti-PCV2 IgG between vaccinated and non-vaccinated pigs from Farm A (VAC-A) at each sampling event over time (Table 1, p>0.05). Additionally, the mean optical density values obtained from pigs in Farm A over time demonstrated that there was no seroconversion (Figure 1A). Moreover, in Farm B, IgG levels increased after vaccination until week 15 of age, when they started to decrease, while non-vaccinated pigs from the same farm did not seroconvert (Figure 1B) and showed statistically lower IgG titers over time (p<0.05, Table 1) compared to vaccinated pigs within the same farm. Interestingly, none of the vaccinated or non-vaccinated pigs in this study had anti-PCV2 IgG levels greater than 0.41 after 23 weeks of age.\n\nMean IgG levels are compared within farm (vaccinated vs. non-vaccinated) and between farms (VAC-A vs. VAC-B). 
Different letters within farm indicate a significant difference (p<0.05) in the mean IgG level between vaccinated and non-vaccinated pigs. The significance level of the difference in IgG level between vaccinated pigs in Farm A (VAC-A) and Farm B (VAC-B) by week is indicated with * (p<0.05) and ** (p<0.01). Pigs in Farm B were boosted at 5 weeks of age.\n\nSD: standard deviation\n\nELISA and PCV2 viral load comparison between vaccinated and non-vaccinated pigs in Farm A (panel A) and Farm B (panel B). Bars indicate the mean IgG level in vaccinated (black) and non-vaccinated (grey) pigs at 3, 7, 11, 15, 19, and 23 weeks of age. Lines indicate the mean PCV2 viral load in vaccinated (black) and non-vaccinated (grey) pigs at 3, 7, 11, 15, 19, and 23 weeks of age. *p<0.05.\n\nAll serum samples from this study tested PCR positive for PCV2; however, none had a viral load greater than 10^4 DNA copies/ml (Figure 1A and B). Hence, all pigs were considered PCR positive, but with low viral loads, and therefore PMWS negative or asymptomatic during the study period. Additionally, there was no difference within farm in the viral load between vaccinated and non-vaccinated pigs, nor was any difference found in the viral load between vaccinated pigs in Farm A (VAC-A) and vaccinated pigs in Farm B (VAC-B).\n\n\nDiscussion\n\nTo better understand the effect of PCV2 vaccination on the IgG response and PCV2 viral loads in pigs after weaning, we designed a longitudinal study and compared two different vaccination schemes under field conditions. We found that the PCV2 viral load in pigs after weaning was not associated with the vaccination scheme used in each farm studied. However, we found differences in the IgG levels between farms that could be associated with vaccination schemes. 
Understanding the effect of different vaccines and vaccine schemes on virus load and humoral response is important to design better health interventions to control PCV2 infection and minimize its effect on swine health and production.\n\nPCV2 vaccination has been proven to control the effect of PCV2 infection on swine health and production (Cline et al., 2008; Horlen et al., 2008; Kixmöller et al., 2008) and there are different PCV2 vaccination schemes used in the contemporary swine industry. However, new PCV2 genotypes have been discovered (Davies et al., 2016; Xiao et al., 2015) and vaccine failure has been described (Fraile et al., 2015; Wang et al., 2009). In this study, we found low viral loads regardless of the vaccination scheme used in the farms studied. These findings were expected because vaccination can reduce the percentage of infectious pigs (Cline et al., 2008; Fachinger et al., 2008; Feng et al., 2014; Opriessnig et al., 2010). It is possible that viral loads remained low due to continuous vaccination of the herd regardless of the vaccination scheme. It was interesting to find that non-vaccinated animals maintained low viral loads within farms endemically infected with PCV2. We speculate that finding non-vaccinated pigs with low viral titers was the result of overall herd immunity. This is in agreement with the findings of Feng et al. (2014), in which mass vaccination against PCV2 reduced viral loads at the population level. Another explanation for vaccinated and non-vaccinated pigs with low viral loads is that there was no PCV2 circulating on the farms and that continuous vaccination of the population has indeed minimized PCV2 infection between pigs.\n\nIn this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time, mainly explained by the second dose (booster) used in piglets in Farm B and the vaccination schemes used in gilts and sows. 
In our study, vaccination against PCV2 using two doses in piglets resulted in a higher antibody response than a single dose (p<0.05), even though in terms of protection both options have been shown to be effective and to control PCV2 viremia (Lyoo et al., 2011). However, a single dose at 3 weeks of age might interfere with maternal antibodies, as described before (Fort et al., 2009; Fraile et al., 2012; Martelli et al., 2011). In our study, pigs from Farm A showed higher levels of maternally derived antibodies at weaning, did not seroconvert after a single vaccination, and showed low PCV2 loads over time. Pejsak et al. (2010) and Opriessnig et al. (2010) demonstrated that the presence of maternally derived antibodies does not affect the efficacy of PCV2 subunit vaccines, and observed low concentrations of viral DNA in serum after vaccination (as seen in our study), absence of histological lesions, and improvement in productive parameters. Moreover, the different humoral immune response between vaccinated and non-vaccinated pigs in Farm B corresponded to a classical pattern of antibody response due to vaccination. Furthermore, this is the classical profile of a humoral response after weaning in the absence of circulating virus. The humoral immune profile of piglets and sows is determined by PCV2 circulation and vaccination schemes, and is associated with virus load in pigs after weaning.\n\nFraile et al. (2015) defined four clusters of pigs based on PCV2 serological and PCR profiles. Cluster 1 is composed mainly of non-vaccinated sows and non-vaccinated pigs, in which viremic pigs are present with increasing antibody levels over time. Cluster 2 contains mostly vaccinated sows and non-vaccinated piglets, in which late PCV2 infection and seroconversion are observed. 
Cluster 3 has mainly vaccinated sows and vaccinated pigs; viremia is rare and antibodies decrease over time. Cluster 4 is composed essentially of non-vaccinated sows and vaccinated pigs, in which infected animals are rare and high IPM titers are observed. Regardless of the vaccination scheme used in our study (Farm A versus B), all pigs met the criteria of cluster 3, rare viremia and antibody induction over time, even though not all sows were vaccinated (Farm B).\n\nThe present study contributes to the understanding of PCV2 infection and control under field conditions. However, it is important to keep in mind that we assumed that the farms were endemically infected with PCV2, although high viral loads were never observed. Therefore, we could not test whether the vaccines induced appropriate protection or whether the virus challenge was minimal. Additionally, our low sample size for the non-vaccinated control groups (n=10) might have been insufficient to detect viremic pigs under a very low prevalence of the virus at the population level.\n\nVaccination is a key intervention to control the impact of PCV2 on swine health and production. Our findings illustrated that different vaccination schemes against PCV2 can maintain low viral loads in endemically infected populations regardless of the different humoral immune profiles observed over time. These results are important because they can help to prevent PCV2 infections and minimize the effect of PCVAD on swine health and production. Future studies are required to understand the epidemiology of PCV2 infection in positive farms with a very low prevalence of PCV2 infections.\n\n\nEthical statement\n\nThe farms included in the study are associated with Pork Colombia and follow the guidelines of production, biosecurity and animal welfare required by this institution. Approval was requested from the farms where the study was conducted and they agreed to its completion. The veterinarians of each farm supervised and collaborated with the study. 
The Bioethics Committee of the Faculty of Veterinary Medicine and Animal Sciences of the National University of Colombia approved the procedures performed on the pigs (resolution OF-CBE-FMVZ-0006-10).\n\nEvery effort was made to reduce the suffering of the pigs to a minimum. Veterinarians trained in this procedure took the blood samples, and the pigs were monitored for one hour after sampling to control for any adverse effects of the procedure.\n\n\nData availability\n\nDataset 1: Data of the results obtained in the study. The data obtained and analysed for the ELISA and qPCR tests are available in an attached document, where they are classified by farm. Likewise, the results of the negative controls used are included. DOI: 10.5256/f1000research.13160.d188246 (Vargas-Bermudez et al., 2017).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research was financed by Pork Colombia (former Colombian Association of Pig Farmers - Asoporcicultores) and by the Colombian National Fund for Pig Industry (FNP).\n\n\nAcknowledgements\n\nThe authors want to express their gratitude to Pork Colombia for the financial support granted to this study, To MVs Arnold Mora and Eduardo Vargas for their kind collaboration, and Dr. Carl A. Gagnon, (Swine and poultry infectious diseases research center -CRIPA, Université de Montréal, St-Hyacinthe, Québec, Canada).\n\n\nReferences\n\nAllan G, Krakowka S, Ellis J, et al.: Discovery and evolving history of two genetically related but phenotypically different viruses, porcine circoviruses 1 and 2. Virus Res. 2012; 164(1–2): 4–9. PubMed Abstract | Publisher Full Text\n\nBeach NM, Meng XJ: Efficacy and future prospects of commercially available and experimental vaccines against porcine circovirus type 2 (PCV2). Virus Res. 2012; 164(1–2): 33–42. PubMed Abstract | Publisher Full Text\n\nCline G, Wilt V, Diaz E, et al.: Efficacy of immunising pigs against porcine circovirus type 2 at three or six weeks of age. Vet Rec. 2008; 163(25): 737–740. PubMed Abstract\n\nDavies B, Wang X, Dvorak CM, et al.: Diagnostic phylogenetics reveals a new Porcine circovirus 2 cluster. Virus Res. 2016; 217: 32–37. PubMed Abstract | Publisher Full Text\n\nDupont K, Nielsen EO, Baekbo P, et al.: Genomic analysis of PCV2 isolates from Danish archives and a current PMWS case-control study supports a shift in genotypes with time. Vet Microbiol. 2008; 128(1–2): 56–64. PubMed Abstract | Publisher Full Text\n\nFachinger V, Bischoff R, Jedidia SB, et al.: The effect of vaccination against porcine circovirus type 2 in pigs suffering from porcine respiratory disease complex. Vaccine. 2008; 26(11): 1488–1499. 
\n\nFenaux M, Opriessnig T, Halbur PG, et al.: A chimeric porcine circovirus (PCV) with the immunogenic capsid gene of the pathogenic PCV type 2 (PCV2) cloned into the genomic backbone of the nonpathogenic PCV1 induces protective immunity against PCV2 infection in pigs. J Virol. 2004; 78(12): 6297–303.\n\nFeng H, Blanco G, Segalés J, et al.: Can Porcine circovirus type 2 (PCV2) infection be eradicated by mass vaccination? Vet Microbiol. 2014; 172(1–2): 92–99.\n\nFort M, Sibila M, Allepuz A, et al.: Porcine circovirus type 2 (PCV2) vaccination of conventional pigs prevents viremia against PCV2 isolates of different genotypes and geographic origins. Vaccine. 2008; 26(8): 1063–1071.\n\nFort M, Sibila M, Pérez-Martín E, et al.: One dose of a porcine circovirus 2 (PCV2) sub-unit vaccine administered to 3-week-old conventional piglets elicits cell-mediated immunity and significantly reduces PCV2 viremia in an experimental model. Vaccine. 2009; 27(30): 4031–4037.\n\nFraile L, Segalés J, Ticó G, et al.: Virological and serological characterization of vaccinated and non-vaccinated piglet subpopulations coming from vaccinated and non-vaccinated sows. Prev Vet Med. 2015; 119(3–4): 153–161.\n\nFraile L, Sibila M, Nofrarías M, et al.: Effect of sow and piglet porcine circovirus type 2 (PCV2) vaccination on piglet mortality, viraemia, antibody titre and production parameters. Vet Microbiol. 2012; 161(1–2): 229–234.\n\nFranzo G, Cortey M, de Castro AM, et al.: Genetic characterisation of Porcine circovirus type 2 (PCV2) strains from feral pigs in the Brazilian Pantanal: An opportunity to reconstruct the history of PCV2 evolution. Vet Microbiol. 2015a; 178(1–2): 158–162.
\n\nFranzo G, Tucciarone CM, Dotto G, et al.: International trades, local spread and viral evolution: the case of porcine circovirus type 2 (PCV2) strains heterogeneity in Italy. Infect Genet Evol. 2015b; 32: 409–415.\n\nGerber PF, Garrocho FM, Lana AM, et al.: Serum antibodies and shedding of infectious porcine circovirus 2 into colostrum and milk of vaccinated and unvaccinated naturally infected sows. Vet J. 2011; 188(2): 240–242.\n\nGuo LJ, Lu YH, Wei YW, et al.: Porcine circovirus type 2 (PCV2): genetic variation and newly emerging genotypes in China. Virol J. 2010; 7: 273.\n\nHe J, Cao J, Zhou N, et al.: Identification and Functional Analysis of the Novel ORF4 Protein Encoded by Porcine Circovirus Type 2. J Virol. 2013; 87(3): 1420–1429.\n\nHorlen KP, Dritz SS, Nietfeld JC, et al.: A field evaluation of mortality rate and growth performance in pigs vaccinated against porcine circovirus type 2. J Am Vet Med Assoc. 2008; 232(6): 906–912.\n\nJacela JY, Dritz SS, DeRouchey JM, et al.: Field evaluation of the effects of a porcine circovirus type 2 vaccine on finishing pig growth performance, carcass characteristics, and mortality rate in a herd with a history of porcine circovirus-associated disease. J Swine Health Prod. 2011; 19(1): 10–18.\n\nKixmöller M, Ritzmann M, Eddicks M, et al.: Reduction of PMWS-associated clinical signs and co-infections by vaccination against PCV2. Vaccine. 2008; 26(27–28): 3443–3451.\n\nLiu J, Chen I, Kwang J: Characterization of a Previously Unidentified Viral Protein in Porcine Circovirus Type 2-Infected Cells and Its Role in Virus-Induced Apoptosis. J Virol. 2005; 79(13): 8262–8274.
\n\nLyoo K, Joo J, Caldwell B, et al.: Comparative efficacy of three commercial PCV2 vaccines in conventionally reared pigs. Vet J. 2011; 189(1): 58–62.\n\nMartelli P, Ferrari L, Morganti M, et al.: One dose of a porcine circovirus 2 subunit vaccine induces humoral and cell-mediated immunity and protects against porcine circovirus-associated disease under field conditions. Vet Microbiol. 2011; 149(3–4): 339–351.\n\nOlvera A, Sibila M, Calsamiglia M, et al.: Comparison of porcine circovirus type 2 load in serum quantified by a real time PCR in postweaning multisystemic wasting syndrome and porcine dermatitis and nephropathy syndrome naturally affected pigs. J Virol Methods. 2004; 117(1): 75–80.\n\nOpriessnig T, Patterson AR, Madson DM, et al.: Comparison of the effectiveness of passive (dam) versus active (piglet) immunization against porcine circovirus type 2 (PCV2) and impact of passively derived PCV2 vaccine-induced immunity on vaccination. Vet Microbiol. 2010; 142(3–4): 177–183.\n\nOpriessnig T, Gerber PF, Xiao CT, et al.: Commercial PCV2a-based vaccines are effective in protecting naturally PCV2b-infected finisher pigs against experimental challenge with a 2012 mutant PCV2. Vaccine. 2014; 32(34): 4342–4348.\n\nOpriessnig T, Xiao CT, Gerber PF, et al.: Emergence of a novel mutant PCV2b variant associated with clinical PCVAD in two vaccinated pig farms in the U.S. concurrently infected with PPV2. Vet Microbiol. 2013; 163(1–2): 177–183.
\n\nPark C, Seo HW, Han K, et al.: Comparison of four commercial one-dose porcine circovirus type 2 (PCV2) vaccines administered to pigs challenged with PCV2 and porcine reproductive and respiratory syndrome virus at 17 weeks postvaccination to control porcine respiratory disease complex under Korean field conditions. Clin Vaccine Immunol. 2014; 21(3): 399–406.\n\nPejsak Z, Podgórska K, Truszczynski M, et al.: Efficacy of different protocols of vaccination against porcine circovirus type 2 (PCV2) in a farm affected by postweaning multisystemic wasting syndrome (PMWS). Comp Immunol Microbiol Infect Dis. 2010; 33(6): e1–e5.\n\nRincón Monroy MA, Ramirez-Nieto GC, Vera VJ, et al.: Detection and molecular characterization of porcine circovirus type 2 from piglets with porcine circovirus associated diseases in Colombia. Virol J. 2014; 11: 143.\n\nSegalés J: Porcine circovirus type 2 (PCV2) infections: Clinical signs, pathology and laboratory diagnosis. Virus Res. 2012; 164(1–2): 10–19.\n\nSegalés J: Best practice and future challenges for vaccination against porcine circovirus type 2. Expert Rev Vaccines. 2015; 14(3): 473–487.\n\nShen H, Wang C, Madson DM, et al.: High prevalence of porcine circovirus viremia in newborn piglets in five clinically normal swine breeding herds in North America. Prev Vet Med. 2010; 97(3–4): 228–236.\n\nTakahagi Y, Toki S, Nishiyama Y, et al.: Differential effects of porcine circovirus type 2 (PCV2) vaccination on PCV2 genotypes at Japanese pig farms. J Vet Med Sci. 2010; 72(1): 35–41.
\n\nTrible BR, Rowland RR: Genetic variation of porcine circovirus type 2 (PCV2) and its relevance to vaccination, pathogenesis and diagnosis. Virus Res. 2012; 164(1–2): 68–77.\n\nVargas-Bermudez DS, Díaz A, Mogollón JD, et al.: Dataset 1 in: Longitudinal comparison of the humoral immune response and viral load of Porcine Circovirus Type 2 in pigs with different vaccination schemes under field conditions. F1000Research. 2017.\n\nWang F, Guo X, Ge X, et al.: Genetic variation analysis of Chinese strains of porcine circovirus type 2. Virus Res. 2009; 145(1): 151–156.\n\nXiao CT, Halbur PG, Opriessnig T: Global molecular genetic analysis of porcine circovirus type 2 (PCV2) sequences confirms the presence of four main PCV2 genotypes and reveals a rapid increase of PCV2d. J Gen Virol. 2015; 96(Pt 7): 1830–1841.\n\nZhai SL, Chen SN, Wei ZZ, et al.: Co-existence of multiple strains of porcine circovirus type 2 in the same pig from China. Virol J. 2011; 8: 517."
}
|
[
{
"id": "30289",
"date": "08 Feb 2018",
"name": "Alvaro Rafael Ruiz-Garrido",
"expertise": [
"Reviewer Expertise Swine infection diseases and their effect in swine intensive production systems."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI think that the paper is very good, and should be published, but there are somethings that must be improved before, as follows:\n•\n\nAbstract The authors should review the conclusions, because they did not observe statistical differences between the vaccinated and unvaccinated groups, in both farms, regarding to the viral loads. They just studied 2 variables (humoral response and viral load) and they only found that the humoral response is different depending of the vaccine used.\n•\n\nMethods The authors should explain better the differences that exist between both farms, from the infrastructural, animal’s flow and management point of view. It is lamentable that the authors do not have a replicate of the treatment in each farm, in order to give more power to the results.\n•\n\nStatistical analysis The authors should use a test of repeated samples ANOVA, if the assumptions allow it.\n•\n\nResults The authors should review the tables, because there are some differences in the results that are possible to obtain from the original data and the ones showed in tables (mean and standard deviation) and figures (mean) of the paper.\nIn Figure 1, the authors should use the same scale for the ELISA results of farm A and B. 
Additionally, they have to indicate whether the * indicates statistical differences between vaccinated and unvaccinated animals of Farm B or statistical differences between the different weeks of sampling of vaccinated animals of Farm B.\n•\n\nDiscussion The authors should discuss the phrase “However, at 3 weeks of age the anti-PCV2 IgG levels were higher in piglets from Farm A (VAC-A) than in piglets from Farm B (VAC-B) (p<0.05).”, which is in the results, and the implication that the farms were different from the beginning, and how this can influence the results that they obtain.\nAs Figure 1 shows, the authors should give explanations for the serology decay of the vaccinated animals in Farm B at weeks 19 and 23. Additionally, the authors should discuss why there is an increase, although not statistically significant, in the mean PCV2 DNA load in 3 of the 4 groups at week 23, as shown in Figure 1.\nThere is some over-conclusion, since the paper states that “In our study, vaccination against PCV2 using two doses in piglets results in a higher antibody response than a single dose (p<0.05), even though in terms of protection the two options have shown to be effective and control PCV2 viremia”; the authors cannot affirm this because there were no differences in viral load between the control (unvaccinated) and vaccinated groups.\nThe authors should indicate the meaning of IPM in cluster 4.\nThe authors should review the sentence “Our findings illustrated that different vaccination schemes against PCV2 can maintain low viral load in endemically infected populations regardless of the different humoral immune profiles observed over time”. 
The authors did not observe statistical differences between the control group and the vaccinated group, so they cannot affirm the above sentence.\nIt could have been interesting if the authors had measured the productive parameters of the treatment groups, to see any difference, but with the reduced number of animals (especially in the control group) this was not possible.\nThe authors should discuss the differences between farms (including the IgG level at the beginning of the study), the vaccine used, vaccine protocols and the epidemiology of PCV2 at the population level, and their effect on the results.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3916",
"date": "29 Aug 2018",
"name": "Jairo Jaime",
"role": "Author Response",
"response": "2. Alvaro Ruíz Garrido 2.1. Abstract 2.1.1. Include in the abstract the variables handled (humoral response and viral load) indicating that there are only changes in the immune response. In the Abstract, the paragraph of methods says what the reviewer is requesting: We designed a longitudinal study and measured IgG levels by ELISA and virus load by quantitative PCR in pigs after weaning. In the abstract also, in the paragraph of results, the following sentence was added: We found that low virus loads were maintained in pigs from both farms, regardless of the vaccination scheme used (p> 0.05). However, IgG levels were observed over time (p <0.05) while no significant differences were found in viral loads. This suggests that different humoral immune response is not associated with different virus loads observed over time. 2.2. Methods: 2.2.1. Indicate the differences between farms in terms of infrastructure, animal flow and management. The sentence was rectified as follows: For this study two commercial pig farms in Colombia (Farm A and B), endemically infected with PCV2, were conveniently selected. Farm A was 500-sow farrow-to-finish but with the nursery all in all out but it is close to site1 and site 3 is distant with continuous flow management. Farm B was 250-sow farrow-to-wean farm, with two additional sites for the nursery and finishing stages of production. Sites 1, 2 and 3 are geographically distant and the nursery or site two is all in all out. Site three is distant from site two but management is in continuous flow. 2.2.2. In the statistical analysis because an ANOVA test of repeated samples was not done. We decided to not use ANOVA because the data was not normally distributed and it was not independent at all (vaccinated and not vaccinated animals were in the same farm). Hence we used a non parametric test to compare the mean of to correlated samples. 2.3. Results 2.3.1. 
Review the tables, since there is a difference between the original data and the values in the tables (SD/mean). The reviewer is correct and the mean and SD for animals in Farm A in Table 2 are incorrect. The table has been updated with the right values for each group. However, the statistical differences noted within and between farms are correct. 2.3.2. In Figure 1, use the same scale for the ELISA. Does the * indicate differences between vaccinated animals (Farm B) or statistical differences over time? In Figure 1, different scales were used so that the reader could better visualize the differences; however, the reviewer is right and the two graphs were adjusted to the same scale. The * indicates the difference between vaccinated and unvaccinated animals in each evaluated week, not over time. This is clarified in the legend of the figure. 2.4.3. In paragraph 2, the sentence: However, at 3 weeks of age the anti-PCV2 IgG levels were higher in piglets from Farm A (VAC-A) than in piglets from Farm B (VAC-B) (p<0.05). What implications does this have from the beginning and how can it influence the results? The presence of higher levels of antibodies in Farm A, both in vaccinated and unvaccinated pigs, at the beginning of the experiment (week 3) could indicate that there was greater transmission of maternal antibodies compared to Farm B, without this difference affecting the viral load in either farm. 2.2.4. In Figure 1, why is there a decline in the serology of vaccinated pigs in Farm B at weeks 19 and 23? It should also be analyzed why there is an increase, without statistical significance, in 3 of the 4 groups at week 23 in the average PCV2 DNA load. The decay of antibodies at weeks 19 and 23 in Farm B in the vaccinated group without modification of the viral load would indicate that the antibodies detectable by ELISA have been metabolized, but does not show that the pigs have lost protection. 
By week 23, an increase in viral loads (without statistical significance) could be seen, which could indicate that viral loads were rising while antibodies were decreasing; if the pigs had been kept longer, it is likely that viral loads would have increased to levels of risk. (This is better addressed in the discussion.) 2.3. Discussion 2.3.1. In paragraph 3 it cannot be stated: In this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time mainly explained by the second dose (booster) used in piglets in farm B and the vaccination schemes used in gilts and sows. This correction coincides with 1.3.2. Mike Murtaugh 1.3.2. In paragraph 3, you are right that it cannot be said that: In this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time mainly explained by the second dose (booster) used in piglets in farm B and the vaccination schemes used in gilts and sows. The sentence was rectified as follows: In this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time. This result can probably be explained by the second dose (booster) used in piglets in Farm B, although this cannot be concluded from the results obtained, since the levels of neutralizing antibodies were not evaluated. Studies show that vaccination against PCV2 does not necessarily stimulate capsid-specific antibodies but does seem to be involved in the increase of neutralizing antibodies (Dvorak et al., 2018). 2.3.2. What does IPM mean? It was corrected to IPMA (immunoperoxidase monolayer assay). 2.2.7. Paragraph 6 states: Our findings illustrated that different vaccination schemes against PCV2 can maintain low viral load in endemically infected populations, regardless of the different humoral immune profiles observed over time. This statement is not correct since the unvaccinated group maintained low viral loads. 
This statement was corrected as follows: Our findings illustrated that, regardless of the vaccination scheme used, low viral loads of PCV2 were maintained, although a similar response was found in the unvaccinated group. This could indicate that when a farm has had a vaccination program established for some time, it can contribute to the control of the virus. This can probably be explained by the presence of neutralizing antibodies in the control group that were not detected by the ELISA test. 2.3.3. The authors should discuss the differences between farms (including the IgG level at the beginning of the trial), the vaccine used, the vaccine protocols and the epidemiology of PCV2 at the population level and their effect on the results. It is likely that the level of neutralizing antibodies in both farms was sufficient to control the virus. The implication of these initial antibody levels for the results is not clear; they could have affected the response itself. In Farm A, they may have kept antibody levels low throughout the experiment, while in Farm B they may have increased them. The foregoing may be explained by the consumption of the antibodies against the vaccine challenge at 3 weeks (Farm A)."
}
]
},
{
"id": "30258",
"date": "09 Feb 2018",
"name": "Michael P. Murtaugh",
"expertise": [
"Reviewer Expertise Viral immunology",
"veterinary immunology",
"molecular virology",
"phylogenetics",
"viral evolution",
"animal infectious diseases"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors evaluated the effects of moderately different vaccination schemes on PCV2, concluding there was no substantial difference in the various vaccination schemes. The study is narrow in scope, having used two independent farms for the main treatment difference, and a different vaccine on each farm. These confounding factors limit the generalizability of the findings, which the authors are aware of. It is a common limitation in field studies that is a bit compensated for by the direct applicability of the findings. The lack of replication and the small sample sized are additional limitations that the authors acknowledge.\n\nIt was a bit surprising that the sow herds were either fully vaccinated or entering gilts were vaccinated. This practice is to my knowledge uncommon in North America since it is obvious that capsid vaccines used only in the farrowing room and maybe the nursery are highly effective in preventing PCVAD in growing pigs. It also is common knowledge that the level of PCV2 viremia in finishing pigs has been reduced tremendously since vaccines were introduced in North America in 2006 (Dvorak et al. National reduction in porcine circovirus type 2 prevalence following introduction of vaccination. Vet Micro 189 [2016] 86-90). 
For this reason, much of the preceding published literature may be out of date and not relevant to the present situation.\n\nThere is wide variation in quality in the published literature that the authors might want to evaluate. Phylogenetics studies with hundreds or thousands of sequences (e.g. Davies. [2016] Diagnostic phylogenetics reveals a new porcine circovirus 2 cluster. Virus Res. 217:32-37) are superior to reports with tens of sequences, which often miss important variants due to random chance. The authors also should refer to original and primary reports rather than reviews when citing new discoveries.\n\nMinor comments: Methods. Farms and sample selection, 1st paragraph – The phrase \"vaccinated on a weekly basis\" is confusing. Perhaps the phrase could be deleted or the vaccination scheme explained a bit more.\nDiscussion. It is incorrect to say that no PCV2 was circulating in the farm, since there were PCV2-positive animals in all treatment groups. It also is clear that PCV2 is environmentally stable, providing numerous sources of infection in the environment of the pigs (Dvorak et al. [2013] Multiple routes of porcine circovirus type 2 transmission to piglets in the presence of maternal immunity. Vet. Microbiol. 166:365-374.)\n\nDiscussion. Third paragraph – “In this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time, mainly explained by the second dose (booster) used in piglets in Farm B and the vaccination schemes used in gilts and the sows.” Because of the confounding factors present in the experimental design, it really is not possible to conclude that the booster was the reason for the differences in humoral response. Interestingly, it has been shown previously that vaccination does not necessarily boost capsid-specific antibodies, but does seem to be involved in increasing neutralizing antibody titers. Perhaps this is dependent upon the vaccine used, which would agree with your data. 
(Dvorak et al. Effect of Maternal Antibody Transfer on Antibody Dynamics and Control of Porcine Circovirus Type 2 Infection in Offspring. Viral Immunology [2018] 31: 40-46.)\n\nDiscussion. Fourth paragraph – “Fraile et al….” I think you mean non-vaccinated, not “none vaccinated”.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3915",
"date": "29 Aug 2018",
"name": "Jairo Jaime",
"role": "Author Response",
"response": "1. Mike Murtaughg 1.1. Introduction: 1.1.1. It has the reason that it has been shown that the level of viremia has decreased in North America using vaccination, particularly in the United States. Correction was made, thus leaving the phrase: PCVAD prevention is mainly based on vaccination against PCV2 infections (Fort et al., 2009; Feng et al., 2014), which has led to a decrease in the prevalence of the virus and viremia levels (Dvorak et al., 2016). 1.1.2. It has the reason that phylogeny should be compared with papers that support analysis of hundreds or thousands of sequences to obtain clearer clusters. In this case, the bibliography was corrected in the first paragraph of the introduction. 1.2. Methods: 1.2.1. The phrase ... at 3 and 5 weeks of age on a weekly basis. It was eliminated on a weekly basis. 1.3. Discussion: 1.3.1. In Paragraph 2, you are right in stating that: Another explanation for vaccinated and non-vaccinated pigs with low viral loads is that there was no PCV2 circulating in the farm and that continuous vaccination of the populations has indeed minimized PCV2 infection between pigs. Evidently there was PCV2 circulation. The phrase was rectified and changed by: The presence of low viral loads in both vaccinated and unvaccinated pigs shows that the virus is circulating. Studies have shown that PCV2 is very stable in the environment, causing numerous routes of infection and that piglets can also be infected in the presence of maternal immunity (Dvorak et al., 2013). 1.3.2. In paragraph 3, you are right that it cannot be said that: In this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time mainly explained by the second dose (booster) used in piglets in farm B and the vaccination schemes used in gilts and sows. The sentence was rectified as follows: In this study, we found differences in the humoral response between vaccinated pigs from Farm A and Farm B over time. 
This result can probably be explained by the second dose (booster) used in piglets in Farm B, although this cannot be concluded from the results obtained since the levels of neutralizing antibodies were not evaluated. Studies show that vaccination against PCV2 does not necessarily stimulate capsid-specific antibodies but does seem to be involved in the increase of neutralizing antibodies (Dvorak et al., 2018)."
}
]
},
{
"id": "30290",
"date": "13 Feb 2018",
"name": "Jesús Hernandez",
"expertise": [
"Reviewer Expertise Viral immunology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript by Vargas-Bermudez et al. evaluated humoral response (IgG) and viremia in pigs from farms with two different vaccination programs. The authors concluded that viral loads were low regardless the vaccination schemes. The manuscript is well written, clear and provides interesting information.\n\nMajor concerns:\nConclusions\nThere are statements that are not supported by the results and have to be modified.\n\n“Another explanation for vaccinated and non-vaccinated pigs with low viral loads is that there was no PCV2 circulating in the farm and that continuous vaccination of the populations has indeed minimized PCV2 infection between pigs”.\nPositive PCR indicate that PCV2 is circulating in the farm.\n\n“Our findings illustrated that different vaccination schemes against PCV2 can maintain low viral load in endemically infected populations regardless of the different humoral immune profiles observed over time”.\nThis statement is not correct, because non-vaccinated group maintained low viral loads.\n\nMinor concerns:\n\nResults\nCan you include the IgG values of week 5?\n\nIt is not clear the weaning age. It is week 3?\n\nThe inclusion of vaccination time in the table 1 could help in the interpretation.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3917",
"date": "29 Aug 2018",
"name": "Jairo Jaime",
"role": "Author Response",
"response": "3. Jesus Hernandez: 3.1. Discussion. 3.1.1. In discussion, paragraph 2, you are correct in stating that: Another explanation for vaccinated and non-vaccinated pigs with low viral loads is that there was no PCV2 circulating in the farm and that continuous vaccination of the populations have indeed minimized PCV2 infection between pigs. This correction coincides with 1.3.1. Mike Murtaughg 1.3.1. In Paragraph 2, you are right in stating that: Another explanation for vaccinated and non-vaccinated pigs with low viral loads is that there was no PCV2 circulating in the farm and that continuous vaccination of the populations has indeed minimized PCV2 infection between pigs. Evidently there was PCV2 circulation. The phrase was rectified and changed by: The presence of low viral loads in both vaccinated and unvaccinated pigs shows that the virus is circulating. Studies have shown that PCV2 is very stable in the environment, causing numerous routes of infection and that piglets can also be infected in the presence of maternal immunity (Dvorak et al., 2013). 3.1.2. In discussion, paragraph 6 states: Our findings illustrated that different vaccination schemes against PCV2 can maintain low viral load in endemically infected populations, regardless of the different humoral immune profiles observed over time. This correction coincides with 2.2.7. (Alvaro Ruíz) 2.2.7. Paragraph 6 states: Our findings illustrated that different vaccination schemes against PCV2 can maintain low viral load in endemically infected populations, regardless of the different humoral immune profiles observed over time. This statement is not correct since the unvaccinated group maintained low viral loads. This statement was corrected as follows: Our findings illustrated that, regardless of the vaccination scheme used, low viral loads of PCV2 were maintained, although a similar response was found in the unvaccinated group. 
This could indicate that when a farm has had a vaccination program established for some time, it can contribute to the control of the virus. This can probably be explained by the presence of neutralizing antibodies in the control group that were not detected by the ELISA test. 3.2. Results: 3.2.1. Can the IgG values for week 5 be included? IgG was not evaluated in that week, which was when the booster (second dose) was given in Farm B. 3.2.2. Is the week of weaning week 3? Yes, it was week 3; this was added to the text in Methods (paragraph 2). 3.2.3. The inclusion of the vaccination times in Table 1 could help interpretation. We share this recommendation and have included the information in the text of the table."
}
]
}
] | 1
|
https://f1000research.com/articles/7-42
|
https://f1000research.com/articles/7-1339/v1
|
24 Aug 18
|
{
"type": "Research Article",
"title": "Using Lamin B1 mRNA for the early diagnosis of hepatocellular carcinoma: a cross-sectional diagnostic accuracy study",
"authors": [
"Amani M. Abdelghany",
"Nasser Sadek Rezk",
"Mona Mostafa Osman",
"Amira I. Hamid",
"Ashraf Mohammad Al-Breedy",
"Hoda A. Abdelsattar",
"Amani M. Abdelghany",
"Nasser Sadek Rezk",
"Mona Mostafa Osman",
"Amira I. Hamid",
"Ashraf Mohammad Al-Breedy"
],
"abstract": "Background: Hepatocellular carcinoma (HCC) is vital medical issue in Egypt. It accounts for 70.48% of all liver tumors among Egyptians. The aim of this study was to determine the diagnostic role of plasma levels of mRNA of lamin B1 by RT-qPCR as an early marker of HCC. Methods: This study was conducted at the Clinical Pathology Department in collaboration with the Department of Tropical Medicine and Infectious Diseases at Ain Shams University Hospitals. It included 30 patients with primary HCC and viral cirrhosis (all were hepatitis C virus-positive) (Group I), in addition to 10 patients with chronic liver diseases (Group II) and 10 healthy age- and sex-matched subjects (Group III). Group I was further classified according to the Barcelona-Clinic Liver Cancer Staging System. Serum α-fetoprotein (AFP) chemiluminescent-immunoassays and RT-qPCR analysis of plasma lamin B1 mRNA levels were performed for all participants. Results: AFP and lamin B1 significantly elevated in patients with HCC compared to those in the other studied groups. AFP and lamin B1 status could discriminate group I from group II and III. A significant increase was found among the three Barcelona stages with regards to AFP and lamin B1 levels. A significant decrease was found between group II and stage 0, A and B with regards to AFP and lamin B1. Lamin B1 and AFP could both differentiate HCC patients with one tumor nodule (T1) from those with two or more tumor nodules (T2&Tm), as well as between those with tumor sizes >3 cm and ≤3 cm. Conclusion: Measurement of lamin B1 mRNA is recommended in patients with chronic liver disease with normal serum AFP, especially in known cirrhotic patients that deteriorate rapidly without any apparent etiology.",
"keywords": [
"Hepatocellular carcinoma",
"lamin B1",
"AFP",
"RT-qPCR"
],
"content": "Introduction\n\nHepatocellular carcinoma (HCC) is the most common primary malignancy of hepatocytes, representing the fifth most common cancer, worldwide1. HCC is the third-leading cause of cancer-related deaths worldwide, accounting for approximately 1 million deaths annually2. Infection with hepatitis C virus (HCV) and/or hepatitis B virus contributes to >60% of HCC cases. HCC is a large problem in low- and middle-income countries and regions3. In Egypt, HCC accounts for 4.7% of all liver diseases. The relative frequency of all liver cancers in Egypt reached 7.3% in 2003 with 95% of them are HCC cases4.\n\nThe nuclear lamina is a proteinaceous meshwork found below the inner nuclear membrane, composing of intermediate filament proteins: lamin A, lamin C and lamins B1 and B2. Lamins are essential for a number of cellular functions, including nuclear stability, chromatin structure and gene expressions5. B-type lamins are widely expressed in most cell types, including embryonic stem cells6. Lamin B1 was found to be essential for nuclear integrity, cell survival and normal development7. However, neither cell proliferation nor skin and hair development were affected by genetic knockout of lamin B1 in keratinocytes8. Moreover, mouse embryonic stem cells do not require any lamins for self-renewal and pluripotency9.\n\nLamin B1 was found to be down regulated in bronchogenic carcinoma, colorectal cancer, and gastric cancer. On the contrary lamin B1 was elevated in prostate cancer10. Thus, the lamin B1 role in both physiology and cancer biology are vague11.\n\nThe present study aimed to determine the diagnostic and prognostic role of plasma levels of mRNA of lamin B1 by RT-qPCR as an early marker of HCC, in comparison to the traditional parameters of α-fetoprotein (AFP) levels, and ultrasound and triphasic computed tomography (CT) imaging. 
The study also aimed to reveal the correlation of lamin B1 mRNA levels with the tumor size and the number of tumor nodules, represented using the Barcelona-Clinic Liver Cancer (BCLC) Staging System12, as well as with serum AFP level.\n\n\nMethods\n\nThis prospective cross-sectional pilot study was approved by the Research and Ethics Committee of Ain Shams University, Cairo, Egypt. Each patient provided verbal consent for the inclusion of their data in this study. Verbal consent was chosen over written consent because some patients were illiterate (the ethics committee approved this form of consent).\n\nThis study was conducted at the Clinical Pathology Department in collaboration with the Department of Tropical Medicine and Infectious Diseases at Ain Shams University Hospitals from May 2016 to January 2017. The patients were consecutively recruited from the Tropical Medicine Department at Ain Shams University Hospitals and the HCC clinic. The study included 30 patients with primary HCC on top of viral cirrhosis (all were HCV positive), comprising 27 males and 3 females (age range, 47–69 years), in addition to 10 patients with chronic liver diseases (7 males and 3 females; age range, 30–70 years) as patient controls and 10 age- and sex-matched healthy subjects (7 males and 3 females; age range, 31–60 years), serving as healthy controls. 
HCC patients were classified into 3 stages using the BCLC Staging System: very early (Stage 0; n=14), who had a single tumor <2 cm with Child class A (according to the Child-Pugh system13) and performance status 0; early (Stage A; n=7), who had a single tumor or a maximum of 3 tumor nodules <3 cm with Child class A–B and performance status 0; and intermediate (Stage B; n=9), who had multinodular tumors with Child class A–B and performance status 0.\n\nThe diagnosis of HCC was based on non-invasive imaging techniques, either triphasic multidetector CT scan or dynamic contrast-enhanced magnetic resonance imaging, according to AASLD guidelines14; these were only performed for cirrhotic patients. For patients with hepatic nodules larger than 1 cm in diameter, one imaging technique was required, while in patients with smaller lesions, both techniques were performed for confirmation. Pathological diagnosis was performed for selected lesions in which imaging studies did not demonstrate the typical HCC criteria.\n\nSubjects with malignancies other than HCC, autoimmune diseases, chronic liver diseases other than viral hepatitis, benign liver tumors or secondary (metastatic) liver tumors, and BCLC stage C or D disease were excluded from the study.\n\nAll individuals included in this study were subjected to a full assessment of medical history, focusing on previous hepatic disorders or predisposing factors preceding liver disease; thorough clinical examination, with special emphasis on abdominal examination, jaundice, edema and ascites; and radiological investigations (including CT scan (for HCC patients only) and abdominal ultrasound for patients with hepatic disorders and normal controls). Serum AFP was assayed by a chemiluminescent-immunometric technique and plasma lamin B1 mRNA was quantitated by RT-qPCR.\n\nA total of 4 ml venous blood was withdrawn from each subject; 2 ml were collected in EDTA K3 vacutainers for the lamin B1 assay and centrifuged at 1500g for 10 minutes. 
Plasma was collected, aliquoted and stored at −70°C. The remaining 2 ml were collected in sterile vacutainers with a Z Serum Sep Clot Activator (Greiner Bio-One). Afterwards, blood was centrifuged for 10 min at 1000g, and the serum was used for immediate analysis of AFP.\n\nAFP was assayed by electro-chemiluminescence on a Cobas e411 immunoassay autoanalyzer (Roche Diagnostics GmbH), using the AFP α1-fetoprotein immunoassay kit also provided by Roche.\n\nRT-qPCR was performed through several steps, as follows. (i) RNA extraction from the EDTA-K3 plasma samples was performed using a ready-made extraction kit (miRNeasy Mini Kit) supplied by Qiagen, Inc. (ii) Reverse transcription and cDNA synthesis were conducted using the extracted RNA with the QuantiTect Reverse Transcription kit (Qiagen, Inc.). Using these reagents, only mRNAs with 3'-poly(A) tails are templates for cDNA synthesis. cDNA synthesized with this system can be used as a template in the PCR reaction. (iii) DNA was amplified and detected by qPCR, using an RT-PCR Master Mix kit supplied by Qiagen, Inc. Amplification was performed using the Stratagene Mx3005P real-time cycler (Agilent Technologies, Inc.). qPCR was performed according to the following protocol: 5 min at 95°C (PCR initial activation step), followed by 45 cycles of 30 s at 95°C (denaturation) and 30 s at 60°C (combined annealing and extension step with fluorescence data collection). A negative control containing all reagents except template RNA was included in each run. (iv) Results were reported as relative quantification, where the normalized level of target gene expression was calculated using the 2–ΔΔCq formula15.\n\nStatistical analysis was performed using SPSS (version 22.0, IBM Corp.). Qualitative data were expressed as percentages, whereas quantitative data were expressed as mean ± standard deviation. Skewed data were expressed as median and inter-quartile range. 
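The relative-quantification step mentioned above (the 2–ΔΔCq method) can be sketched in a few lines. This is a minimal illustration only; the Cq values and the idea of a single reference gene are assumptions for the example, not values from the study:

```python
# Relative quantification with the 2^-(ΔΔCq) method.
# All Cq values below are hypothetical, chosen only to illustrate the formula.

def fold_change(cq_target_sample, cq_ref_sample, cq_target_control, cq_ref_control):
    """Target-gene expression in a sample relative to a control,
    normalized to a reference gene run on the same plate."""
    delta_cq_sample = cq_target_sample - cq_ref_sample      # ΔCq for the sample
    delta_cq_control = cq_target_control - cq_ref_control   # ΔCq for the control
    delta_delta_cq = delta_cq_sample - delta_cq_control     # ΔΔCq
    return 2 ** (-delta_delta_cq)

# Example: the target amplifies 2 cycles earlier (relative to the reference
# gene) in the sample than in the control, i.e. roughly 4-fold higher expression.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

Each cycle of earlier amplification corresponds to a doubling of starting template, which is why the fold change is an exponent of 2.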
The Kruskal–Wallis test (H test) was applied for statistical comparison between three or more sets of data if one or more of them had a skewed distribution. The Mann-Whitney U-test (Wilcoxon rank-sum test) was used to compare two independent sets of data if one or both of them had a skewed distribution. Spearman's rank correlation coefficient (rs) was used to assess the degree of correlation between two sets of variables if one or both of them showed a skewed distribution. The diagnostic performance of lamin B1 was evaluated in terms of its diagnostic sensitivity, specificity and efficacy. The area under the curve (AUC) was used to describe the overall test performance. Multi-ROC curve analysis was applied to allow the comparison of different rules over varying test thresholds.\n\n\nResults\n\nA highly statistically significant difference was found among the three studied groups with regard to both AFP and lamin B1 levels (H=23.4 and H=29.9, respectively; both p<0.01). Plasma levels of lamin B1 mRNA (2-ΔΔCq) were significantly higher in HCC patients than in chronic liver disease (CLD) patients and healthy controls, with medians (Q1–Q3) of 3.9 (2–13.3), 0.9 (0.8–1.1) and 1 (0.9–1.2), respectively.\n\nUpon comparison between two groups individually, AFP was significantly higher in group I versus groups II and III (Z=3.4, p<0.01 and Z=4.1, p<0.001, respectively). Similarly, the median levels of lamin B1 were significantly higher in group I versus groups II and III (Z=4.3, p<0.001 and Z=4.3, p<0.001, respectively). There was no significant difference between group II and group III as regards AFP and lamin B1 (Z=1.4 and Z=0.9, p>0.05, respectively) (Table 1).\n\nAFP, α-fetoprotein\n\nThe median levels of AFP were 7.9 ng/dl in patients with stage 0 disease, 243 ng/dl in those with stage A disease and 251 ng/dl in those with stage B disease. 
Regarding lamin B1, median expression levels were lowest in patients with stage 0 disease (2), higher in those with stage A disease (5.1) and highest in those with stage B disease (19.4).\n\nComparison between the three stages of HCC patients showed a statistically significant difference in AFP (H=9.9, p<0.01) and lamin B1 (H=24.4, p<0.001). When comparing two stages individually, AFP was significantly lower in stage 0 vs stage A (p<0.05), in stage A vs stage B (p<0.05), and in stage 0 vs B (p<0.05). This statistical difference was highly significant in the case of lamin B1 (p<0.001, p<0.01 and p<0.001 for stage 0 vs stage A, stage A vs stage B, and stage 0 vs B, respectively) (Table 2).\n\nAFP, α-fetoprotein.\n\nComparison of group II with the three stages of HCC (0, A and B) revealed significantly lower values in group II versus each of stages 0, A and B for both AFP (Z=2.1, Z=3.4 and Z=3.2; p<0.05, p<0.01 and p<0.01, respectively) and lamin B1 (Z=3.4, Z=3.4 and Z=3.7; p<0.01, p<0.01 and p<0.001, respectively) (Table 3).\n\nAFP, α-fetoprotein\n\nRegarding the tumor size and number of nodules, comparison between HCC patients with one tumor nodule (T1) and those with two or more tumor nodules (T2/Tm) revealed that the first group had significantly lower AFP (Z=3.1, p<0.01) and lamin B1 (Z=4.5, p<0.001) levels than the second group. Moreover, HCC patients with tumor size >3 cm had significantly higher values than those with tumor size ≤3 cm as regards AFP and lamin B1 (Z=3.1, p<0.01 and Z=4.6, p<0.001, respectively).\n\nA significant positive correlation was observed between AFP and lamin B1 in group I (r=0.46, p<0.05), but not in group II or III (r=−0.11, p>0.05 and r=−0.41, p>0.05, respectively).\n\nTable 4 shows receiver operating characteristic (ROC) curve analysis applied to the study results to examine the diagnostic performance of lamin B1 and AFP as tumor markers in HCC at different cut-off values. 
At a cut-off level of 5.0 ng/dl, the diagnostic performance of AFP for differentiation between HCC cases and the two control groups showed 80% sensitivity, 90% specificity, 75% negative predictive value (NPV), 92.3% positive predictive value (PPV) and 84% efficacy, with an area under the curve (AUC) of 0.822. The best cut-off to differentiate between group II and stage 0 was 3.5 ng/dl, with 78.6% sensitivity, 60% specificity, 73.3% PPV, 66.7% NPV, 70.8% efficacy and an AUC of 0.762; a cut-off of 142 ng/dl AFP was used to differentiate between stage A and stage B, with 71.4% sensitivity, 92.9% specificity, 83.3% PPV, 86.7% NPV, 85.7% efficacy and an AUC of 0.844.\n\nSN%, sensitivity; SP%, specificity; PPV%, positive predictive value; NPV%, negative predictive value; EFF%, efficacy; AFP, α-fetoprotein.\n\nPlasma lamin B1 mRNA showed much better performance in differentiating between HCC cases and the two control groups: at a 2-ΔΔCq cut-off of 1.4, sensitivity was 100%, specificity was 90%, NPV was 100%, PPV was 93.4% and efficacy was 96%, with an AUC of 0.962. A 2-ΔΔCq cut-off of 1.3 was used to differentiate between patients with stage 0 HCC and CLD, with 100% sensitivity, 90% specificity, 100% NPV, 93.3% PPV, 95.8% efficacy and an AUC of 0.926. A 2-ΔΔCq cut-off of 2.8 was used to differentiate between patients with stage 0 and A disease, yielding 100% sensitivity, 92.9% specificity, 100% NPV, 87.5% PPV, 95.2% efficacy and an AUC of 0.972 (Figure 1 and Figure 2).\n\nArea under the curve: AFP, 0.822; LMNB1 mRNA, 0.962.\n\nArea under the curve: AFP, 0.762; LMNB1 mRNA, 0.962.\n\nMulti-ROC curve analysis was constructed to assess the diagnostic performance of a combination of both AFP (at a cut-off value of 3.5 ng/dl) and lamin B1 (at a 2-ΔΔCq cut-off value of 1.4) to discriminate between patients with HCC and those with CLD. 
At these cut-off values, the diagnostic sensitivity was 100%, specificity 100%, PPV 100%, NPV 100% and efficacy 100% (Figure 3).\n\nArea under the curve: AFP, 0.844; LMNB1 mRNA, 0.957; multi-ROC, 1.000.\n\n\nDiscussion\n\nResults of the present study revealed a significantly higher level of AFP in group I (median, 31.6 ng/dl) compared to chronic liver disease patients (median, 3.5 ng/dl) and healthy controls (median, 1.3 ng/dl). This was in agreement with the findings of Wei et al.16, who showed that AFP levels are significantly higher in patients with HCC than in those with chronic liver disease (CLD) and healthy controls; they suggested that this increase is due to selective transcriptional activation of the AFP gene in the malignant hepatocytes.\n\nPlasma levels of lamin B1 mRNA (2-ΔΔCq) were significantly higher in HCC patients than in CLD patients and healthy controls. Similar results were achieved by Wong and Luk17. Sun et al.18 found that lamin B1 mRNA was detected in the plasma of 82% of group 1 subjects, whereas it was detected in only 19% of group 2 subjects and in 17% of those in group 3. Levels of lamin B1 mRNA were also significantly upregulated in patients with HCC.\n\nThe accumulated lamin B1 is released from the HCC cells, which have altered metabolism and are in a state of oxidative stress, stimulating the p38 MAPK pathway. In addition, autoantibodies against lamin B1 were found to be positive in 17% of HCC patients, but in none of the CLD patients or healthy controls18. This may be attributed to circulating lamin B1 mRNA in plasma resulting from lysis of cancer cells or from expression of tumor-related genes19.\n\nIn the present study, statistical comparison of the different Barcelona stages in patients with HCC revealed that AFP levels were significantly higher in stages A and B (median, 243 and 251 ng/dl, respectively) than in stage 0 (median, 7.9 ng/dl). 
These results are in agreement with those of Peng et al.20, who found a significant increase in AFP levels corresponding to Barcelona staging. This is also in accordance with the results of Zhang et al.21, who concluded that AFP can act as an independent prognostic factor for HCC, as it can induce the malignant progression of liver cancer via tumorigenesis and cellular growth, migration and invasion.\n\nWith regard to plasma levels of lamin B1 (2-ΔΔCq), there was a significant increase in patients with stage A and B HCC compared to patients with stage 0 disease, with medians (Q1–Q3) in stages 0, A and B of 2.0 (1.7–2.4), 5.1 (3.7–6) and 19.1 (14.7–28.2), respectively. Similarly, Sun et al.18, using western blot analysis, found that the expression of lamin B1 was positive in 71% of patients with early-stage HCC and in 83% of patients with late-stage HCC. This suggests that lamin B1 could induce increased invasiveness and promote progression, which may be related to the fact that lamin B1 is present in actively developing tumors17.\n\nThe potential value of lamin B1 as an early diagnostic marker of HCC was demonstrated in the present study, with a statistically significant increase in plasma levels observed in stage 0 HCC patients compared to CLD patients (group II). This indicates its importance in detecting very early cases of HCC. In accordance with the present results, Sun et al.18 demonstrated that lamin B1 mRNA plasma levels were elevated in 76% of early cases of HCC, compared to only 19% of cirrhotic patients. The same sensitivity was revealed by Wong and Luk17. Using MALDI-TOF mass spectrometry, Lim et al.22 found that the expression level of the protein increased in cirrhotic tissue samples and rose even further in tumorous tissue samples. 
This is due to the specific involvement of lamin B1 in carcinogenesis, since it increases more in cancer cells than in cirrhotic cells17.\n\nComparison between patients with stage 0 HCC and those with stage A and B disease revealed that AFP levels in those with stage 0 disease were significantly lower than in those with stages A and B. Similarly, AFP levels in HCC patients with a tumor size >3 cm were significantly different from those in patients with a tumor size ≤3 cm. This is in agreement with the results of Peng et al.20, who revealed that serum AFP correlated with tumor size and that high AFP (>200 ng/dl) was associated with large tumors (>5 cm).\n\nPrevious studies have found serum AFP levels to be increased in HCC patients, with these increased levels being positively associated with tumor size and number of tumors24,25. This finding is in agreement with the postulation that AFP can act as a growth regulator. Increased proliferation in vitro in response to AFP has been observed for developing or embryonic cells and human hepatoma cells, but not untransformed cells, owing to their lack of specific membrane AFP receptors26.\n\nIn an attempt to study the prognostic significance of lamin B1, lamin B1 mRNA plasma levels (2-ΔΔCq) were compared between patients with one tumor nodule and those with two or more nodules. A statistically significant difference was demonstrated, with medians (Q1–Q3) of 2.0 (1.7–2.4) and 10.95 (5.1–22.3), respectively. Similarly, there was a significant difference in lamin B1 expression in patients with tumor sizes ≤3 cm versus those with tumors >3 cm, with median (Q1–Q3) values of 2.0 (1.7–2.4) and 10.9 (5.1–22.3), respectively.\n\nSimilar results were achieved by Sun et al.18, who used proteomic analysis to demonstrate that overexpression of lamin B1 was significantly associated with an increased number of tumor nodules and the size of tumors. 
Using conventional RT-PCR (not real-time PCR), the positivity rate of circulating lamin B1 mRNA increased gradually with tumor stage progression. This could be related to the phosphorylation of lamin B1 mediated by phospholipase Cβ1, resulting in cell proliferation via G2/M cell cycle progression and eventually increasing tumor size and number26.\n\nIn the current study, there was a positive correlation between AFP and lamin B1 levels in HCC patients. However, this disagreed with the results of Sun et al.18, who found no correlation between the two markers. This might be due to the different techniques and study populations used in the two studies.\n\nThe diagnostic performance of AFP was assessed using ROC curve analysis. At a cut-off level of 5.0 ng/dl, AFP was able to differentiate between HCC cases and the two control groups with 80% sensitivity, 90% specificity, 75% NPV, 92.3% PPV and 84% efficacy. These results were comparable to those of Kim et al.26, who, at a cut-off of 70.4 ng/dl, reported a lower diagnostic sensitivity (54.8%) but a higher specificity (100%). In a study by Yang et al.27, AFP in HCC patients showed lower diagnostic sensitivity and specificity (68.7% and 61.9%, respectively).\n\nOn the other hand, plasma lamin B1 mRNA showed much better performance in differentiating between group 1 and groups 2 and 3, with 100% sensitivity, 90% specificity, 100% NPV, 93.4% PPV and 96% efficacy. Sun et al.18 observed a comparable diagnostic performance, with a sensitivity of 86% and specificity of 80%. Similarly, Wong and Luk17 showed a somewhat lower sensitivity (76%) and specificity (82%) for the detection of HCC when assessed against cirrhotic patients and healthy controls. 
Moreover, Liu et al.19 reported that lamin B1 had a diagnostic sensitivity of 100%, similar to the current study, but a much lower specificity of 27%, with 100% PPV and 70% NPV.\n\nIn conclusion, measurement of lamin B1 mRNA is highly recommended in patients with CLD and normal serum AFP, especially in known cirrhotic patients who deteriorate rapidly without any apparent etiology. Addition of plasma lamin B1 mRNA to the current standard tests for the diagnosis of HCC as a new diagnostic and screening tool could greatly improve the ability to identify such patients and thus allow them to benefit from earlier treatment.\n\n\nData availability\n\nDataset 1. Complete raw data associated with the study, including demographic information, infection status, tumor characteristics and Cq values. DOI: https://doi.org/10.5256/f1000research.14795.d21241328.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors declare that no grants were involved in supporting this work.\n\n\nReferences\n\nBosetti C, Turati F, La Vecchia C: Hepatocellular carcinoma epidemiology. Best Pract Res Clin Gastroenterol. 2014; 28(5): 753–770. PubMed Abstract | Publisher Full Text\n\nSiegel RL, Miller KD, Jemal A: Cancer statistics, 2015. CA Cancer J Clin. 2015; 65(1): 5–29. PubMed Abstract | Publisher Full Text\n\nZhao C, Nguyen MH: Hepatocellular Carcinoma Screening and Surveillance: Practice Guidelines and Real-Life Practice. J Clin Gastroenterol. 2016; 50(2): 120–133. PubMed Abstract | Publisher Full Text\n\nEl-Garem H, Abdel-Hafez H, Foaud A, et al.: Tissue biomarkers in the early detection of hepatocellular carcinoma among egyptian patients with chronic hepatitis C: A possible genetic profile. Br J Med Med Res. 2013; 3(4): 1858–1870. Publisher Full Text\n\nDauer WT, Worman HJ: New messages in the nuclear envelope. Cell Cycle. 2010; 9(4): 645–646. PubMed Abstract | Publisher Full Text\n\nSolovei I, Wang AS, Thanisch K, et al.: LBR and lamin A/C sequentially tether peripheral heterochromatin and inversely regulate differentiation. Cell. 2013; 152(3): 584–98. PubMed Abstract | Publisher Full Text\n\nBroers JL, Ramaekers FC, Bonne G, et al.: Nuclear lamins: laminopathies and their role in premature ageing. Physiol Rev. 2006; 86(3): 967–1008. PubMed Abstract | Publisher Full Text\n\nYang SH, Chang SY, Yin L, et al.: An absence of both lamin B1 and lamin B2 in keratinocytes has no effect on cell proliferation or the development of skin and hair. Hum Mol Genet. 2011; 20(18): 3537–3544. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim Y, Sharov AA, McDole K, et al.: Mouse B-type lamins are required for proper organogenesis but not by embryonic stem cells. Science. 2011; 334(6063): 1706–10. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCoradeghini R, Barboro P, Rubagotti A, et al.: Differential expression of nuclear lamins in normal and cancerous prostate tissues. Oncol Rep. 2006; 15(3): 609–13. PubMed Abstract | Publisher Full Text\n\nFoster CR, Przyborski SA, Wilson RG, et al.: Lamins as cancer biomarkers. Biochem Soc Trans. 2010; 38(Pt 1): 297–300. PubMed Abstract | Publisher Full Text\n\nBruix J, Sherman M, American Association for the Study of Liver Diseases: Management of hepatocellular carcinoma: an update. Hepatology. 2011; 53(3): 1020–1022. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPugh RN, Murray-Lyon IM, Dawson JL, et al.: Transection of the oesophagus for bleeding oesophageal varices. Br J Surg. 1973; 60(8): 646–649. PubMed Abstract | Publisher Full Text\n\nManini MA, Sangiovanni A, Fornari F, et al.: Clinical and economical impact of 2010 AASLD guidelines for the diagnosis of hepatocellular carcinoma. J Hepatol. 2014; 60(5): 995–1001. PubMed Abstract | Publisher Full Text\n\nPage RB, Stromberg AJ: Linear methods for analysis and quality control of relative expression ratios from quantitative real-time polymerase chain reaction experiments. Scientific World Journal. 2011; 11: 1383–1393. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWei JJ, Xiao BM, Liang T, et al.: Hepatobiliary Pancreat Dis Int. 2006; 5: 25.\n\nWong KF, Luk JM: Discovery of lamin B1 and vimentin as circulating biomarkers for early hepatocellular carcinoma. In: Josic D, Hixson DC (eds.): Liver Proteomics: Methods and Protocols. Methods Mol Biol. 2012; 909: 295–310. PubMed Abstract | Publisher Full Text\n\nSun S, Xu MZ, Poon RT, et al.: Circulating Lamin B1 (LMNB1) biomarker detects early stages of liver cancer in patients. J Proteome Res. 2010; 9(1): 70–78. 
PubMed Abstract | Publisher Full Text\n\nLiu H, Zhang J, Wang S, et al.: Screening of autoantibodies as potential biomarkers for hepatocellular carcinoma by using T7 phase display system. Cancer Epidemiol. 2012; 36(1): 82–88. PubMed Abstract | Publisher Full Text\n\nPeng SY, Chen WJ, Lai PL, et al.: High alpha-fetoprotein level correlates with high stage, early recurrence and poor prognosis of hepatocellular carcinoma: significance of hepatitis virus infection, age, p53 and beta-catenin mutations. Int J Cancer. 2004; 112(1): 44–50. PubMed Abstract | Publisher Full Text\n\nZhang N, Gu J, Yin L, et al.: Incorporation of alpha-fetoprotein(AFP) into subclassification of BCLC C stage hepatocellular carcinoma according to a 5-year survival analysis based on the SEER database. Oncotarget. 2016; 7(49): 81389–81401. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim SO, Park SJ, Kim W, et al.: Proteome analysis of hepatocellular carcinoma. Biochem Biophys Res Commun. 2002; 291(4): 1031–1037. PubMed Abstract | Publisher Full Text\n\nFarinati F, Marino D, De Giorgio M, et al.: Diagnostic and prognostic role of alpha-fetoprotein in hepatocellular carcinoma: both or neither? Am J Gastroenterol. 2006; 101(3): 524–532. PubMed Abstract | Publisher Full Text\n\nFurihata T, Sawada T, Kita J, et al.: Serum alpha-fetoprotein level per tumor volume reflects prognosis in patients with hepatocellular carcinoma after curative hepatectomy. Hepatogastroenterology. 2008; 55(86–87): 1705–1709. PubMed Abstract\n\nFiume R, Ramazzotti G, Teti G, et al.: Involvement of nuclear PLCbeta1 in lamin B1 phosphorylation and G2/M cell cycle progression. FASEB J. 2009; 23(3): 957–66. PubMed Abstract | Publisher Full Text\n\nKim MJ, Bae KW, Seo PJ, et al.: [Optimal cut-off value of PIVKA-II for diagnosis of hepatocellular carcinoma--using ROC curve]. Korean J Hepatol. 2006; 12(3): 404–411. 
PubMed Abstract\n\nYang GH, Fan J, Xu Y, et al.: Osteopontin combined with CD44, a novel prognostic biomarker for patients with hepatocellular carcinoma undergoing curative resection. Oncologist. 2008; 13(11): 1155–1165. PubMed Abstract | Publisher Full Text\n\nAbdelghany AM, Rezk NS, Osman MM, et al.: Dataset 1 in: Using Lamin B1 mRNA for the early diagnosis of hepatocellular carcinoma: a cross-sectional diagnostic accuracy study. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.14795.d212413"
}
|
[
{
"id": "37615",
"date": "06 Sep 2018",
"name": "Dina Fekry",
"expertise": [
"Reviewer Expertise Diabetes mellitus"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nHCC is the most common primary malignancy of hepatocytes.It is one of the most common causes of cancer world wide.\nThe nuclear Lamina is a proteinaceous meshwork and it is essential for a number of cellular functions. Lamin B1 was found to be essential for nuclear integrity and cell survival.\nThis article aimed at determination of the diagnostic as well as prognostic value of Lamin B1 in HCC patients together with AFP.\nThe study was carried out on 30 patients with HCC which were further classified into stage 0,A and B according to size and number of tumor nodules,10 patients with CLD considered as patient controls together with 10 healthy patients considered as healthy controls. Measurement of mRNA of Lamin B1 was done by RT-qPCR however, measurement of AFP was done by electrochemiluminescent assay. Statistical analysis was done using SPSS( version 22.0, IBM corp.).MultiROC curve was used to evaluate the diagnostic sensitivity and specificity of Lamin B1in HCC. Presented results showed that AFP was significantly higher ion HCC patients when compared to other groups. As for Lamin B1, it was significantly higher in stages A and B in HCC patients when compared to 0 stage. Besides, it was significantly higher in stage 0 compared to CLD giving it a prevalage as an early marker of HCC. 
Also, its prognostic significance was clear, since it was significantly higher in patients with larger tumors and more numerous nodules.\nIn my opinion, the article gives full information on the methods and analysis; however, the authors may consider including the Barcelona classification of HCC in the abstract in order to make the results, which refer to stages 0, A and B, easier to understand. In addition, mentioning the relation of Lamin B1 to the liver cell and how it affects liver cell integrity may be of great importance for the core of the article. The statistical analysis was impressive and entirely appropriate; however, the correlation between AFP and Lamin B1 in the HCC group would preferably be presented in a table. Finally, the conclusions drawn were adequately supported by the presented results.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "39775",
"date": "30 Oct 2018",
"name": "Sherief Abd-Elsalam",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors investigated Lamin B1 mRNA for the early diagnosis of hepatocellular carcinoma. A good job you did. The work is technically sound and the work is clearly presented. However, it needs adding important recent references about prevalence of HCC in Egypt. Additionally, there are minor comments need to be addressed:\nIn introduction; the data about HCC prevalence in egypt is too old since 2003; is too old; you should add updated citations about HCC prevalence in Egypt; I suggest these two references:\nPrevalence of hepatocellular carcinoma in chronic hepatitis C patients in Mid Delta, Egypt: A single center study1. Epidemiology of liver cancer in Nile delta over a decade: A single-center study2.\n\nIn methods; you should clarify why the small number of patients in group II, III. Is it the cost?\nIn discussion, You should add limitations of the study. The most important one is small sample size.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1339
|
https://f1000research.com/articles/7-623/v1
|
22 May 18
|
{
"type": "Study Protocol",
"title": "Study protocol for the Anesthesiology Control Tower—Feedback Alerts to Supplement Treatments (ACTFAST-3) trial: a pilot randomized controlled trial in intraoperative telemedicine",
"authors": [
"Stephen Gregory",
"Teresa M. Murray-Torres",
"Bradley A. Fritz",
"Arbi Ben Abdallah",
"Daniel L. Helsten",
"Troy S. Wildes",
"Anshuman Sharma",
"Michael S. Avidan",
"ACTFAST Study Group",
"Stephen Gregory",
"Teresa M. Murray-Torres",
"Bradley A. Fritz",
"Arbi Ben Abdallah",
"Daniel L. Helsten",
"Troy S. Wildes",
"Anshuman Sharma"
],
"abstract": "Background: Each year, over 300 million people undergo surgical procedures worldwide. Despite efforts to improve outcomes, postoperative morbidity and mortality are common. Many patients experience complications as a result of either medical error or failure to adhere to established clinical practice guidelines. This protocol describes a clinical trial comparing a telemedicine-based decision support system, the Anesthesiology Control Tower (ACT), with enhanced standard intraoperative care. Methods: This study is a pragmatic, comparative effectiveness trial that will randomize approximately 12,000 adult surgical patients on an operating room (OR) level to a control or to an intervention group. All OR clinicians will have access to decision support software within the OR as a part of enhanced standard intraoperative care. The ACT will monitor patients in both groups and will provide additional support to the clinicians assigned to intervention ORs. Primary outcomes include blood glucose management and temperature management. Secondary outcomes will include surrogate, clinical, and economic outcomes, such as incidence of intraoperative hypotension, postoperative respiratory compromise, acute kidney injury, delirium, and volatile anesthetic utilization. Ethics and dissemination: The ACTFAST-3 study has been approved by the Human Resource Protection Office (HRPO) at Washington University in St. Louis and is registered at clinicaltrials.gov (NCT02830126). Recruitment for this protocol began in April 2017 and will end in December 2018. Dissemination of the findings of this study will occur via presentations at academic conferences, journal publications, and educational materials.",
"keywords": [
"telemedicine",
"decision support",
"protocol",
"randomized controlled trial"
],
"content": "Introduction\n\nEach year, over 300 million surgical procedures are performed worldwide1. Unfortunately, many patients will experience significant morbidity or mortality in the postoperative period2. Research conducted at our institution and others has demonstrated an early postoperative mortality rate ranging from 1–5% and 90-day to 1-year mortality rates between 5–10%2–13. Additionally, 10–40% of patients will experience some type of postoperative surgical complication, including surgical site infection, respiratory complications, myocardial infarction, stroke and acute kidney injury, resulting in a three- to seven-fold increase in postoperative mortality3,4,11,12.\n\nDespite the overall decline in surgical morbidity and mortality over time, the risk of perioperative adverse events remains substantial2. Some of this risk may be a manifestation of either underlying patient pathology or the complexity of the surgical procedure itself9,12,14,15. However, evidence also suggests that medical errors contribute considerably to negative patient outcomes16,17. Although some errors may be considered active, such as the administration of an incorrect medication, the failure to follow established clinical practice guidelines and recommendations likely has a more significant overall detrimental effect on patient outcomes. Prior studies have documented that deviation from evidence-based standards of care is common, and that this deviation results in poorer patient outcomes18–22.\n\nInterventions to improve patient safety and outcomes remain a major focus in anesthesiology. The complexity of anesthetic practice can lead to frequent cognitive errors in the perioperative arena23,24, suggesting that the development of a real-time, tailored feedback system to support intraoperative decision-making may be valuable. The development of automated feedback and alerting systems has been demonstrated to improve adherence to a number of treatment guidelines25–42. 
However, the impact of decision support systems appears to decay over time43–46, and improvements in process variables may not translate into improved patient outcomes47.\n\nIn the intensive care unit (ICU), the use of remote monitoring to augment care, commonly referred to as “telemedicine,” decreases ICU mortality and the length of ICU stay, and improves adherence to clinical practice guidelines48–52. While this type of clinical decision support has seen robust adoption in the critical care setting, its utilization in the intraoperative care of surgical patients is limited50. In light of the benefits that have been demonstrated from using telemedicine in the ICU setting, we believe that the implementation of such a system in the operating room has the potential to elevate the general safety and quality of perioperative care.\n\nWe have designed a multifaceted approach for the development and institution of an Anesthesiology Control Tower (ACT) to provide real-time intraoperative telemedicine decision support. In the first component of our approach, we outlined a strategy of iterative usability testing and platform modification that allowed us to develop a high-fidelity, user-centered system53. We intend to continue separate usability analyses over the course of the pilot trial in order to evaluate the key usability elements of effectiveness, efficiency, and satisfaction54 in a more real-world setting. Because the impact of a clinical intervention is dependent on the success of the process through which it is implemented55, we will also evaluate implementation outcomes that are relevant to the use of the ACT in the perioperative setting56,57. In the second component of our approach, we will employ large-scale data analytics, integrating perioperative information in order to create forecasting algorithms for negative patient trajectories58. 
In the current manuscript, we describe the third element of our investigation: a pilot randomized controlled trial that aims to demonstrate the superiority of the ACT in improving adherence to best care practices when compared to enhanced usual care.\n\n\nMethods and analysis\n\nThe ACTFAST-3 study is a pragmatic comparative effectiveness trial that is taking place at an academic, university-affiliated adult tertiary care hospital in the United States that performs over 19,000 surgeries a year. We plan to enroll approximately 12,000 patients over the study period, with approximately 6,000 patients in the control arm and 6,000 patients in the intervention arm (Figure 1). Patients will be included with a waiver of informed consent, as approved by the Human Research Protection Office (protocol number 201603038), as the risk associated with the ACT has been deemed to be minimal. Randomization will occur at the level of individual operating rooms on a daily basis.\n\nThe ACT will monitor all patients in both the control and intervention operating rooms using information gathered from the electronic medical record (EMR) and from a customized version of a perioperative monitoring and alerting program called AlertWatch® (Ann Arbor, MI). AlertWatch is an FDA-cleared (K130401) system that displays integrated patient information and alerts clinicians to physiologic derangements. It was recently demonstrated that use of the AlertWatch software was associated with improvements in several process measures, although this did not translate into an effect on clinical outcomes47. For the purposes of our intervention, the commercially available AlertWatch platform was heavily modified through usability testing53 to create a customized AlertWatch “Control Tower” mode that is only available within the ACT (Figure 2 and Figure 3). The standard platform will remain available to all OR clinicians during this study. 
The ACT will provide clinicians in the intervention ORs with real-time feedback based on the available electronic resources, including AlertWatch Control Tower. Anesthesia providers in rooms assigned to the control group will also be monitored but will not receive decision support. Notably, the standard medical staffing models for providing an anesthetic will not be affected by this intervention, as the ACT is designed to augment decision-making, rather than replace critical team members.\n\n(A) AlertWatch® Control Tower Census View. This view shows summary information for operating rooms with ongoing procedures. Physiological alerts (e.g., low blood pressure) are shown as black or red squares, depending on the severity of the derangement, with red indicating a more severe abnormality. Checkmarks appear inside an operating room when an alert is triggered that has been classified as actionable and requires a response on the part of the clinicians in the Control Tower (see Figure 3). Control rooms are indicated with a “Do Not Contact” symbol. (B) AlertWatch® Control Tower Patient Display View. This deidentified intraoperative patient display demonstrates organ-specific information individualized to each patient. Colors outlining organs indicate normal (green), marginal (yellow) or abnormal function (red). Orange would indicate an organ system at risk due to pre-existing conditions. The left side of the display shows patient characteristics and the case information. Lab values, if available, are listed beneath the kidneys. Alerts generated by the AlertWatch® system are listed on the right-hand side of the display. Specific alerts, determined by the study team to be clinically significant and actionable, trigger a checkmark to appear at the bottom left of the screen. This informs the Anesthesiology Control Tower (ACT) clinician that an alert is present that must be addressed. 
Clicking on this checkmark allows clinicians in the ACT to review and address these alerts (Figure 3).\n\nClinicians in the Anesthesiology Control Tower (ACT) use the Case Review window to address actionable Control Tower alerts, indicated by checkmarks on the Census View and the Patient Display. Within this Case Review window, clinicians document their assessment of the significance of each alert, what action they would recommend, and, in the case of intervention operating rooms (ORs), the reaction of the clinician in the OR to the ACT support.\n\nThe primary outcome measures in the ACTFAST-3 pilot study are compliance with best care practices for intraoperative temperature management and intraoperative blood glucose management (Table 1). We will also explore additional intraoperative process measures in addition to surrogate outcomes (Table 2). The incidence of intraoperative hypotension and the incidence of postoperative renal dysfunction, atrial fibrillation, respiratory failure and delirium will be assessed via review of the EMR. Other postoperative complications, including intraoperative awareness, surgical site infection, readmission, and death will be assessed via analysis of the existing Center for Clinical Excellence Registry, American College of Surgeons’ National Surgical Quality Improvement Program (NSQIP) database, Society of Thoracic Surgery (STS) database, and Systematic Assessment and Targeted Improvement of Services Following Yearlong Surgical Outcomes Surveys (SATISFY-SOS) database59. Outcomes related to the usability of the ACT intervention, including efficiency and efficacy of the software platform, will be obtained from AlertWatch data logs. These logs will also be used to obtain data related to the feasibility of implementing the pilot ACT. 
User satisfaction will be assessed through surveys administered to members of the anesthesia department.\n\nThe trial will include all adult patients undergoing surgery at two campuses of an academic university-associated hospital, Barnes-Jewish Hospital (South Campus and Parkview Tower) (St. Louis, MO, USA), between 7:00 AM and 4:00 PM Monday through Friday (Figure 1). This includes a total of 48 operating room locations. The ACT will function on days when at least two anesthesia providers are available, one of whom must be an attending anesthesiologist. Patients undergoing surgical procedures with greater than 50% of the case length occurring outside of the ACT hours will be excluded from analysis. All patients younger than 18 will also be excluded from the study. Patients who undergo multiple surgeries in a single hospitalization or who have a second surgical procedure within 30 days of their initial surgery will be analyzed according to their initial randomization assignment. Patients returning for a second surgery more than 30 days after their initial surgical encounter will be considered as separate patients in the analysis. We will also obtain data from a group of historical control patients for the 6 months prior to the initiation of the ACTFAST-3 study, as part of an analysis related to potential sources of bias and contamination.\n\nA randomization algorithm integrated into the AlertWatch system will direct patient group allocation on a daily basis. Due to the nature of the intervention in this study, clinicians working in the ACT and those randomized to receive support cannot be blinded to the intervention. Researchers responsible for extracting data during the course of the study will be blinded to group allocation at the time of extraction.\n\nA multidisciplinary team of clinicians in the ACT will remotely monitor all active operating rooms at the campus of interest.
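The daily OR-level randomization described above could be sketched as follows. This is an illustrative, self-contained sketch and not the AlertWatch implementation; the function and argument names (`randomize_operating_rooms`, `day_seed`) are ours.

```python
import random

def randomize_operating_rooms(or_ids, day_seed):
    """Toy sketch of daily cluster randomization: split the operating rooms
    into equal-sized intervention and control arms for one day."""
    rng = random.Random(day_seed)  # reproducible, per-day assignment
    shuffled = list(or_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # Return (intervention, control) arm membership, sorted for readability
    return sorted(shuffled[:half]), sorted(shuffled[half:])

# 48 operating room locations, split 24/24 for a given day
intervention, control = randomize_operating_rooms(range(1, 49), day_seed=20180426)
```

Seeding by day keeps the assignment stable for auditing while still varying from day to day.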
ACT clinicians will include attending anesthesiologists, anesthesiology fellows, anesthesiology residents, and certified and student registered nurse anesthetists. Information will be obtained in near real-time from multiple complementary sources, including the AlertWatch Control Tower software (Figure 2) and the EMR. The clinicians in the ACT will use this information to communicate with OR clinicians to help maintain compliance with intraoperative best care practices and to assist with the detection and management of physiological derangements32,60–63. These clinicians will evaluate all alerts generated by the AlertWatch Control Tower notification system (Figure 3), including alerts from both the intervention and the control operating rooms. For ORs allocated to the intervention arm, the ACT will deliver decision support to the primary personnel caring for the patient via text message or telephone call. The clinician receiving the alert will determine the applicability of the alert to the clinical situation and will choose whether to carry out any recommendations sent by the ACT. In patients with a persistent critical event, the ACT will offer real-time assistance with crisis resource management.\n\nOperating rooms assigned to the control group will undergo the same monitoring and assessment by the ACT, but clinicians in these ORs will not receive any contact from the ACT. However, if clinicians staffing the ACT feel ethically obliged to contact a room assigned to the control group due to perceived potential for imminent and significant patient harm, they will be able to do so. Although we anticipate that this will be a rare occurrence, it will still be documented and reported as part of our study outcomes.\n\nData collection for this study will utilize multiple sources to extract outcome measures64. 
All alert data generated by the AlertWatch Control Tower platform will be automatically logged to a secure database, including all responses by the providers in the ACT to individual alerts (Figure 3). Data from the perioperative period will be imported from Metavision® (iMDsoft, Wakefield, Massachusetts, USA), the anesthesiology information management software system currently in use by the Department of Anesthesiology. In addition to capturing comprehensive intraoperative clinical data, Metavision® also stores preoperative information, such as patient characteristics, clinical and surgical history, comorbidities, and data from the immediate post-operative period. Of note, during the anticipated duration of this trial, our hospital system will be transitioning to Epic Systems software (Verona, WI, USA) for both the hospital electronic health record and the anesthesiology information management software. Postoperative data for patient outcomes will be obtained from the inpatient EMR system, and from clinical registries (SATISFY-SOS, NSQIP, STS).\n\nThe primary outcome measures in the ACTFAST-3 study are compliance with recommendations for intraoperative temperature management and intraoperative blood glucose management (Table 1). Data on primary outcome measures will be recorded to an SQL server.\n\nSecondary intraoperative outcomes will include several process, surrogate, and clinical measures (Table 2). Intraoperative process outcomes will include blood pressure management, compliance with recommendations for repeat dosing of antibiotics and for temperature monitoring, management of hyperglycemia, documentation of train of four monitoring following neuromuscular blockade, and adherence to strategies for intraoperative low tidal volume ventilation. Additionally, the impact of the ACT on volatile anesthetic usage will be assessed.
We will also evaluate surrogate and clinical outcomes, specifically, the incidence of postoperative acute renal failure, postoperative atrial fibrillation, postoperative respiratory failure, postoperative delirium, intraoperative awareness, surgical site infection, 30-day hospital readmission, and 30-day mortality. Data will be obtained from review of electronic health records and cross-referencing of patients in the ACTFAST study with other surgical databases, as described above. We will also track the incidence of provider-reported intraoperative adverse events via a review of the departmental quality improvement database. Feasibility of implementing the ACT will be determined in part by examining the number of potentially staffed days versus the actual number of staffed days. Usability outcomes will include metrics such as the median number of alerts addressed by provider and across time.\n\nComparisons between groups will be made with parametric and non-parametric statistical tests, as appropriate. Fisher’s exact or χ2 test will be used to evaluate primary outcome measures with regard to the following proportions: (i) the proportion of patients with a last-documented intraoperative temperature greater than 36 degrees Celsius; and (ii) the proportion of patients arriving to the post-anesthesia care unit or ICU with a blood glucose greater than 180 mg/dl. Contingency statistical tests will be used to compare occurrence of hypothermia and hyperglycemia between groups. Secondary outcomes will be compared between groups using χ2 or Fisher’s exact test for categorical outcomes, and two-sided t tests with unequal variances for comparison of means. By convention, statistical significance will be based on a two-sided p value <0.05. All statistical testing will be performed using SAS® version 9.4 (SAS Institute Inc., Cary, North Carolina, USA).
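For the contingency comparisons named above (e.g., occurrence of hypothermia by study arm), a two-sided Fisher's exact test on a 2×2 table can be written in a few lines of pure Python. This is our own minimal re-implementation for illustration only, not the SAS procedure the protocol will actually use.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]],
    e.g. rows = study arm, columns = outcome present/absent."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):
        # Hypergeometric probability of observing x events in the first row,
        # with all margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum probabilities of all tables at least as extreme as (no more
    # probable than) the observed one; small tolerance guards float ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))
```

For example, `fisher_exact_two_sided(2, 0, 0, 2)` returns 1/3, which matches exhaustive enumeration of the four-subject table.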
The small subset of patients in the control group whose providers may be contacted by the ACT clinicians out of concern for a significant patient safety event will be included in the control group in an intention-to-treat analysis. A sensitivity analysis will also be performed with inclusion of these patients in the intervention group. The frequency and rationale for contacting these rooms will be reported as part of our trial results.\n\nOnce the ACT intervention is implemented, we anticipate several sources of contamination effect in the control group. There is a high likelihood of a robust Hawthorne effect due to OR clinician awareness of the ACT monitoring65,66. Also, all clinicians in the OR will eventually be included in the intervention group, due to the unit of randomization, and will likely become aware of the best management practices of interest in this trial. Therefore, even on days when they do not receive ACT support, clinicians may change their behavior, leading to overlapping improvements in both groups over the course of the study. Additionally, utilization of the AlertWatch software by clinicians in the ORs may increase over time. Learning effects might manifest most strongly among clinicians who staff the ACT and are therefore sensitized to the interventions and outcomes in this study. In order to evaluate the extent of the contamination and Hawthorne effects, we will collect baseline data for the group of historical controls. For categorical variables, contamination will be analyzed using logistic regression with a three-level categorical variable representing group assignment (historical cohort, control group, or intervention group); continuous variables will be analyzed using ANCOVA or non-parametric ANCOVA67.
Additionally, we will track which operating rooms utilize the AlertWatch system intraoperatively, and will plan to perform a subgroup analysis to assess the effect of the ACT in this subset of patients.\n\nWithin the AlertWatch system, all alerts that are generated are automatically logged to a secure database, as are all responses of the ACT clinicians to these alerts (Figure 3). We will analyze these logs to determine how clinicians in the ACT monitor patients, address alerts, and interact with OR clinicians, and how OR clinicians respond to the ACT support. These data will allow us to explore aspects of the real-world usability of the ACT intervention related to efficiency and effectiveness, and will complement information gathered from qualitative usability surveys administered to department members.\n\nIn this study, we plan to enroll a convenience sample of 12,000 patients over the course of the study period, based on the staffing available for the ACT and the usual daily surgical volume of approximately 125 cases. Power analysis was based on the two primary outcomes defined for this study, with the following assumptions:\n\ni) Regarding the core-temperature outcome, we conservatively assumed that only 80% of Barnes-Jewish Hospital patients have their core temperature recorded during surgery. Among patients with their temperature documented, the target for this outcome was that the ACT intervention would increase the proportion of patients whose final recorded intraoperative temperature is above 36°C from 60% to 95%. For this calculation we assumed a standard deviation of core temperature of 0.9 degrees Celsius for both groups, based on an unpublished EMR audit.\n\nii) Regarding the primary outcome of glucose control, we assumed that the prevalence of diabetes mellitus among Barnes-Jewish Hospital surgical patients is about 20%, based on our EMR data over the past 5 years.
Based on the same data, we also assumed that currently 60% of our diabetic patients reach a blood glucose >180 mg/dl at any point during surgery. Our goal was that the ACT intervention would reduce the proportion of patients arriving to the Post Anesthesia Care Unit (PACU) with a blood glucose value greater than 180 mg/dl from 60% to 40%.\n\nA statistical power calculation based on the above assumptions was performed for each of the two primary study outcomes to determine whether the sample size (N=12,000) allocated for this study is adequate. The effective sample size for the study was defined as the largest sample needed to achieve either of the two stated outcomes. We powered all targeted outcomes to detect a difference in proportions (adjusted for contamination between the two study groups) in a completely balanced cluster-randomized design study (24 operating rooms in each group) using two-sided Z-test statistics. We also assumed a minimum of 90% power, a significance level of 0.05, an intracluster correlation coefficient (ICC) varying between 0.01 and 0.05 in increments of 0.005, and a coefficient of variation of cluster sizes of 0.50. Table 3 shows the required sample per operating room as well as the overall sample needed to achieve the study targeted outcomes. The largest sample was required for the proportion of patients whose last recorded intraoperative temperature is equal to or greater than 36°C (N=11,472). This value was sufficient for the other primary outcome.\n\n†See Table 1 for full explanation of outcomes.\n\n*High contamination effects were set to reach 67% as 2 out of 3 physicians will participate in the ACT.\n\nWhile the primary goal of the ACTFAST-3 study is to evaluate the impact of the ACT on patient care and outcomes, the structure and environment of the ACT have allowed for the creation of a novel curriculum in perioperative medicine.
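The cluster-adjusted sample-size logic sketched above can be illustrated with a stdlib-only calculation. The design effect formula DEFF = 1 + ((cv² + 1)·m − 1)·ICC for unequal cluster sizes and the two-proportion Z-test formula are standard; the numbers below use the glucose outcome (60% → 40%) purely for illustration and do not reproduce the protocol's full computation, which also adjusts for contamination. All function names here are ours.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Unadjusted per-arm sample size for a two-sided Z-test comparing
    two proportions (illustrative re-derivation, not the protocol's code)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

def design_effect(icc, mean_cluster_size, cv=0.50):
    """Inflation factor for cluster randomization with unequal cluster
    sizes: DEFF = 1 + ((cv^2 + 1) * m - 1) * ICC."""
    return 1 + ((cv**2 + 1) * mean_cluster_size - 1) * icc

# Glucose outcome: reduce hyperglycemia on PACU arrival from 60% to 40%
n_flat = n_per_arm(0.60, 0.40)                    # individually randomized
deff = design_effect(icc=0.05, mean_cluster_size=250)
n_clustered = ceil(n_flat * deff)                 # per arm, cluster-randomized
```

The mean cluster size of 250 corresponds to 12,000 patients spread over 48 operating rooms; the upper-bound ICC of 0.05 from the protocol's sensitivity range is used here.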
The current educational paradigm for anesthesiology residents primarily focuses on the management of individual patients in the perioperative setting. However, the substantial increase in demand for surgical procedures, a projected shortage of anesthesiologists, and financial constraints in healthcare suggest that it will eventually be infeasible for anesthesiologists to provide the level of supervision that is currently standard in the United States (e.g. one anesthesiologist for every one to four ORs)68. There is currently little emphasis in anesthesiology education on process management, multitasking, and caring for multiple patients in a complex care environment. With the support of the residency program director and departmental chair, we have revised the residency curriculum at our institution to allow each anesthesia resident to spend 2 weeks in the ACT during their final year of residency. We plan to implement an educational curriculum in perioperative telemedicine, focusing on the utilization of healthcare system resources to optimize intraoperative management, improve quality, and provide oversight of multiple patients undergoing complex surgical procedures.\n\nWe do not anticipate the occurrence of significant adverse events during this study. However, the primary investigator and the study team will review any adverse events identified by the departmental quality improvement program as potentially attributable to the ACT. The occurrence of any significant adverse events will be reported to the HRPO, and the study team and HRPO will decide together whether to halt the trial. No formal data-monitoring committee will be used. There will be no audit of trial conduct during the investigation, although data recorded via the AlertWatch system will be reviewed and analyzed to determine appropriate group allocation and inclusion in the final analysis. No interim data analysis is planned for this pilot trial unless unanticipated safety issues are identified.
There are no provisions for post-trial care or compensation to patients enrolled as part of this trial, as the intervention in the ACTFAST-3 trial involves only the addition of real-time decision-support tools and does not change existing anesthesia care models.\n\nThe risk of breach of confidentiality will be minimized. The data necessary for the completion of the trial will be protected by passwords and contained in applications that are compliant with standards for protected health information (PHI). AlertWatch meets this same standard of protection. Individual clinical alerts and the ACT evaluation of these alerts will be documented using an electronic data capture tool in the AlertWatch system. Outcomes data will be stored on one of two Washington University Department of Anesthesiology servers (a SQL server or a Windows file server). Only trained employees of the Department of Anesthesiology or Barnes Jewish Healthcare are granted access to resources on this network. Access to the contents of this study will be further restricted to approved personnel only, using server-level permission access (for the SQL server), or Windows folder permission settings (for the file server). It is a strict policy that PHI cannot be saved or reviewed outside of this protected environment. Whenever possible, extracts for this project will avoid the use of this information. Data extracts can be reconnected to PHI using a special, non-PHI primary key, which this group has successfully used with previous studies.\n\nThe ACTFAST pilot study has important strengths. It is a randomized clinical trial conducted in a high-volume, real-world clinical setting and can be conducted efficiently, as many components of the proposed study are incorporated into existing infrastructures and processes at Washington University. This includes access to existing information technology resources and to established and ongoing registries (SATISFY-SOS, NSQIP and STS).
The data required for analysis of the primary outcome measures are routinely recorded on every patient undergoing surgery at our institution, and the databases used for analysis of secondary surrogate and clinical outcomes also all have high levels of data fidelity.\n\nRandomization of anesthesiology care teams can be easily implemented, and the process for providing feedback alerts does not require any advance preparation on the part of clinicians working in the OR. These clinicians will participate in the ACTFAST trial in the course of their routine clinical work, and the impact on overall workflow and workload will be minimized through the testing in our first phase of the study53. We anticipate that it will be feasible to staff the ACT during the pilot RCT. The feasibility is enhanced by the participation of a highly committed cadre of attending anesthesiologists and all of the residents in the anesthesiology department, as well as an experienced team of investigators that has established a track record of collaboration and completion of major clinical trials.\n\nThe following limitations should be considered. The AlertWatch software is currently available on all computers in the OR, and in-room provider utilization of AlertWatch may increase over the course of the study. In response, we plan to conduct a subgroup analysis with user log-in data to ascertain the impact of in-room software utilization, defined as documentation of intraoperative provider log-in to the AlertWatch system. Also, the ACTFAST study will be vulnerable to both Hawthorne and contamination effects. While we do not think that these effects can be eliminated, we have considered how best to account for them in the analyses. An important constraint and possible source of bias will be that it will not be possible to ensure blinding of OR clinicians, as any communication from the ACT will inform them that their operating room is in the intervention group on that day69.
However, clinicians outside of the OR, and the researchers responsible for extracting data, will be blinded to group assignment.\n\nAnother potential source of bias involves the existing surgical databases that will be used during analysis (i.e. STS, NSQIP, SATISFY-SOS). These registries may themselves be biased by which patients choose to participate and which patients are contactable, with individual patients’ outcomes affecting their willingness or ability to provide reliable information. We have been attempting to mitigate this source of bias by employing three modalities (e-mail, telephone and mail) to reach patients postoperatively in one such study59. Overall, the registries have impressive response rates, and there does not appear to be systematic bias in any of these registries based on baseline patient characteristics. Therefore, we expect our data sources to be robust, with minimal deficiencies.\n\nThis study was approved by the HRPO at Washington University (St. Louis, MO, USA, protocol number 201603038). This protocol is written in compliance with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist with consideration of the Consolidated Standards of Reporting Trials (CONSORT) guidelines70,71.\n\nIf the results of the pilot ACTFAST-3 trial show benefit, the pilot study will likely be replicated as a larger, multicenter study for further validation that this intervention remains beneficial and that it is feasible to institute at other centers. We also anticipate the expansion of the ACT into the surrounding healthcare facilities within our hospital system. Larger trials could focus on expanded clinical and patient-reported outcomes (e.g. death, renal failure, delirium, duration of mechanical ventilation, intensive care length of stay, post-discharge disposition, postoperative falls, return to work, disability-free survival).
The ACT infrastructure could also be used to explore current controversies in perioperative care by testing candidate experimental interventions (e.g., fluid management strategies, blood transfusion triggers). We envision eventual national implementation of the ACT concept, following a path comparable to that of similar programs for intensive care units.\n\nAny significant changes to the protocol or the analysis plan during the trial will be communicated directly to the Washington University HRPO, as well as via update of the ACTFAST-3 registration at clinicaltrials.gov (ClinicalTrials.gov Identifier: NCT02830126). We also plan to publish any modifications made to this protocol during dissemination of the results of the trial. Authorship for the final trial data will be determined in accordance with International Committee of Medical Journal Editors (ICMJE) guidelines.\n\nData from the ACTFAST-3 trial will be made available for analysis in compliance with the recommendations of the ICMJE72. For this study, individual participant data that underlie the results of the trial will be made available after appropriate deidentification, along with the study protocol and statistical analysis plan. We plan to make this information accessible to researchers who provide a methodologically appropriate proposal for the purpose of achieving the aims of that proposal. Data will be available beginning 9 months and ending 36 months following trial publication at a third-party website. Data requestors will need to sign a data access agreement to gain access to trial data. Proposals should be directed to avidanm@wustl.edu.\n\n\nConclusions\n\nDespite aggressive efforts aimed at improving the quality of perioperative care, the risk of morbidity and mortality following a major surgical procedure remains substantial.
In this protocol, we describe a pilot pragmatic, randomized, controlled trial in intraoperative telemedicine that examines the ability of a novel system of real-time feedback to improve adherence to perioperative best care practices. We hypothesize that the implementation of the ACT will be feasible and that it will increase clinician compliance with clinical practice standards. The development of the ACT, as described in this protocol, will also lay the groundwork for a subsequent large randomized controlled trial examining the utility of the ACT in improving patient outcomes following surgical procedures.\n\nThe findings from the trial will be disseminated in the form of posters and oral presentations at scientific conferences, as well as publications in peer-reviewed journals. Updates and results of the study will be available at https://clinicaltrials.gov/ct2/show/NCT02830126.\n\n\nData availability\n\nNo data is associated with this study.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe ACTFAST-3 project, including this protocol, has been funded by a grant from the Agency for Healthcare Research and Quality (R21 HS24581-01).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nMembers of the ACTFAST study group are as follows: Stephen Gregory, Teresa M. Murray-Torres, Bradley A. Fritz, Arbi Ben Abdallah, Daniel L. Helsten, Troy S. Wildes, Anshuman Sharma, Yixin Chen*, Mary Politi*, Alex Kronzer*, Bernadette Henrichs*, Brian A. Torres*, Sherry McKinnon*, Thaddeus Budelier*, Walter Boyle*, Bruce Hall*, Benjamin Kozower*, Sachin Kheterpal*, Michael S. Avidan\n\n*Contributor\n\n\nReferences\n\nWeiser TG, Regenbogen SE, Thompson KD, et al.: An estimation of the global volume of surgery: a modelling strategy based on available data. Lancet. 2008; 372(9633): 139–44. PubMed Abstract | Publisher Full Text\n\nBainbridge D, Martin J, Arango M, et al.: Perioperative and anaesthetic-related mortality in developed and developing countries: a systematic review and meta-analysis. Lancet. 2012; 380(9847): 1075–81. PubMed Abstract | Publisher Full Text\n\nDimick JB, Pronovost PJ, Cowan JA Jr, et al.: Variation in postoperative complication rates after high-risk surgery in the United States. Surgery. 2003; 134(4): 534–40; discussion 540–1. PubMed Abstract | Publisher Full Text\n\nHamel MB, Henderson WG, Khuri SF, et al.: Surgical outcomes for patients aged 80 and older: morbidity and mortality from major noncardiac surgery. J Am Geriatr Soc. 2005; 53(3): 424–9. PubMed Abstract | Publisher Full Text\n\nHealey MA, Shackford SR, Osler TM, et al.: Complications in surgical patients. Arch Surg. 2002; 137(5): 611–7; discussion 617–8.
PubMed Abstract | Publisher Full Text\n\nKertai MD, Pal N, Palanca BJ, et al.: Association of perioperative risk factors and cumulative duration of low bispectral index with intermediate-term mortality after cardiac surgery in the B-Unaware Trial. Anesthesiology. 2010; 112(5): 1116–27. PubMed Abstract | Publisher Full Text\n\nKertai MD, Palanca BJ, Pal N, et al.: Bispectral index monitoring, duration of bispectral index below 45, patient risk factors, and intermediate-term mortality after noncardiac surgery in the B-Unaware Trial. Anesthesiology. 2011; 114(3): 545–56. PubMed Abstract | Publisher Full Text\n\nMonk TG, Saini V, Weldon BC, et al.: Anesthetic management and one-year mortality after noncardiac surgery. Anesth Analg. 2005; 100(1): 4–10. PubMed Abstract | Publisher Full Text\n\nNoordzij PG, Poldermans D, Schouten O, et al.: Postoperative mortality in The Netherlands: a population-based analysis of surgery-specific risk in adults. Anesthesiology. 2010; 112(5): 1105–15. PubMed Abstract | Publisher Full Text\n\nPearse RM, Moreno RP, Bauer P, et al.: Mortality after surgery in Europe: a 7 day cohort study. Lancet. 2012; 380(9847): 1059–65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStory DA, Leslie K, Myles PS, et al.: Complications and mortality in older surgical patients in Australia and New Zealand (the REASON study): a multicentre, prospective, observational study. Anesthesia. 2010; 65(10): 1022–30. PubMed Abstract | Publisher Full Text\n\nTurrentine FE, Wang H, Simpson VB, et al.: Surgical risk factors, morbidity, and mortality in elderly patients. J Am Coll Surg. 2006; 203(6): 865–77. PubMed Abstract | Publisher Full Text\n\nVisser BC, Keegan H, Martin M, et al.: Death after colectomy: it's later than we think. Arch Surg. 2009; 144(11): 1021–7. 
PubMed Abstract | Publisher Full Text\n\nBilimoria KY, Liu Y, Paruch JL, et al.: Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg. 2013; 217(5): 833–42.e1–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee TH, Marcantonio ER, Mangione CM, et al.: Derivation and prospective validation of a simple index for prediction of cardiac risk of major noncardiac surgery. Circulation. 1999; 100(10): 1043–9. PubMed Abstract | Publisher Full Text\n\nInstitute of Medicine (US) Committee on Quality of Health Care in America, Kohn LT, Corrigan JM, et al.: To Err is Human: Building a Safer Health System. Washington, D.C.: National Academy Press, 2000. PubMed Abstract | Publisher Full Text\n\nMakary MA, Daniel M: Medical error-the third leading cause of death in the US. BMJ. 2016; 353: i2139. PubMed Abstract | Publisher Full Text\n\nCabana MD, Rand CS, Powe NR, et al.: Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999; 282(15): 1458–65. PubMed Abstract | Publisher Full Text\n\nDemakis JG, Beauchamp C, Cull WL, et al.: Improving residents' compliance with standards of ambulatory care: results from the VA Cooperative Study on Computerized Reminders. JAMA. 2000; 284(11): 1411–6. PubMed Abstract | Publisher Full Text\n\nGrol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003; 362(9391): 1225–30. PubMed Abstract | Publisher Full Text\n\nNeedham DM, Colantuoni E, Mendez-Tellez PA, et al.: Lung protective mechanical ventilation and two year survival in patients with acute lung injury: prospective cohort study. BMJ. 2012; 344: e2124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSteinman MA, Fischer MA, Shlipak MG, et al.: Clinician awareness of adherence to hypertension guidelines. Am J Med. 2004; 117(10): 747–54. 
Stiegler MP, Ruskin KJ: Decision-making and safety in anesthesiology. Curr Opin Anaesthesiol. 2012; 25(6): 724–9.\n\nStiegler MP, Tung A: Cognitive processes in anesthesiology decision making. Anesthesiology. 2014; 120(1): 204–17.\n\nPROVE Network Investigators for the Clinical Trial Network of the European Society of Anaesthesiology, Hemmes SN, Gama de Abreu M, et al.: High versus low positive end-expiratory pressure during general anaesthesia for open abdominal surgery (PROVHILO trial): a multicentre randomised controlled trial. Lancet. 2014; 384(9942): 495–503.\n\nAronson S, Stafford-Smith M, Phillips-Bute B, et al.: Intraoperative systolic blood pressure variability predicts 30-day mortality in aortocoronary bypass surgery patients. Anesthesiology. 2010; 113(2): 305–12.\n\nAvidan MS, Jacobsohn E, Glick D, et al.: Prevention of intraoperative awareness in a high-risk surgical population. N Engl J Med. 2011; 365(7): 591–600.\n\nAvidan MS, Zhang L, Burnside BA, et al.: Anesthesia awareness and the bispectral index. N Engl J Med. 2008; 358(11): 1097–108.\n\nBehrends M, DePalma G, Sands L, et al.: Association between intraoperative blood transfusions and early postoperative delirium in older adults. J Am Geriatr Soc. 2013; 61(3): 365–70.\n\nBernard AC, Davenport DL, Chang PK, et al.: Intraoperative transfusion of 1 U to 2 U packed red blood cells is associated with increased 30-day mortality, surgical-site infection, pneumonia, and sepsis in general surgery patients. J Am Coll Surg. 2009; 208(5): 931.\n\nBiccard BM, Rodseth RN: What evidence is there for intraoperative predictors of perioperative cardiac outcomes? A systematic review. Perioper Med (Lond). 2013; 2(1): 14.\n\nBratzler DW, Dellinger EP, Olsen KM, et al.: Clinical practice guidelines for antimicrobial prophylaxis in surgery. Am J Health Syst Pharm. 2013; 70(3): 195–283.\n\nBratzler DW, Houck PM, Surgical Infection Prevention Guideline Writers Workgroup: Antimicrobial prophylaxis for surgery: an advisory statement from the National Surgical Infection Prevention Project. Am J Surg. 2005; 189(4): 395–404.\n\nde Almeida JP, Vincent JL, Galas FR, et al.: Transfusion requirements in surgical oncology patients: a prospective, randomized controlled trial. Anesthesiology. 2015; 122(1): 29–38.\n\nFutier E, Constantin JM, Jaber S: Protective lung ventilation in operating room: a systematic review. Minerva Anestesiol. 2014; 80(6): 726–35.\n\nHebert PC, Wells G, Blajchman MA, et al.: A multicenter, randomized, controlled clinical trial of transfusion requirements in critical care. Transfusion Requirements in Critical Care Investigators, Canadian Critical Care Trials Group. N Engl J Med. 1999; 340(6): 409–17.\n\nKurz A, Sessler DI, Lenhardt R: Perioperative normothermia to reduce the incidence of surgical-wound infection and shorten hospitalization. Study of Wound Infection and Temperature Group. N Engl J Med. 1996; 334(19): 1209–15.\n\nKwon S, Thompson R, Dellinger P, et al.: Importance of perioperative glycemic control in general surgery: a report from the Surgical Care and Outcomes Assessment Program. Ann Surg. 2013; 257(1): 8–14.\n\nMashour GA, Shanks A, Tremper KK, et al.: Prevention of intraoperative awareness with explicit recall in an unselected surgical population: a randomized comparative effectiveness trial. Anesthesiology. 2012; 117(4): 717–25.\n\nMashour GA, Tremper KK, Avidan MS: Protocol for the \"Michigan Awareness Control Study\": A prospective, randomized, controlled trial comparing electronic alerts based on bispectral index monitoring or minimum alveolar concentration for the prevention of intraoperative awareness. BMC Anesthesiol. 2009; 9: 7.\n\nWalsh M, Devereaux PJ, Garg AX, et al.: Relationship between intraoperative mean arterial pressure and clinical outcomes after noncardiac surgery: toward an empirical definition of hypotension. Anesthesiology. 2013; 119(3): 507–15.\n\nYoung PY, Khadaroo RG: Surgical site infections. Surg Clin North Am. 2014; 94(6): 1245–64.\n\nKooij FO, Klok T, Hollmann MW, et al.: Decision support increases guideline adherence for prescribing postoperative nausea and vomiting prophylaxis. Anesth Analg. 2008; 106(3): 893–8.\n\nMcEvoy MD, Hand WR, Stoll WD, et al.: Adherence to guidelines for the management of local anesthetic systemic toxicity is improved by an electronic decision support tool and designated \"Reader\". Reg Anesth Pain Med. 2014; 39(4): 299–305.\n\nNair BG, Grunzweig K, Peterson GN, et al.: Intraoperative blood glucose management: impact of a real-time decision support system on adherence to institutional protocol. J Clin Monit Comput. 2016; 30(3): 301–12.\n\nNair BG, Newman SF, Peterson GN, et al.: Feedback mechanisms including real-time electronic alerts to achieve near 100% timely prophylactic antibiotic administration in surgical cases. Anesth Analg. 2010; 111(5): 1293–300.\n\nKheterpal S, Shanks A, Tremper KK: Impact of a Novel Multiparameter Decision Support System on Intraoperative Processes of Care and Postoperative Outcomes. Anesthesiology. 2018; 128(2): 272–82.\n\nBreslow MJ, Rosenfeld BA, Doerfler M, et al.: Effect of a multiple-site intensive care unit telemedicine program on clinical and economic outcomes: an alternative paradigm for intensivist staffing. Crit Care Med. 2004; 32(1): 31–8.\n\nHawkins HA, Lilly CM, Kaster DA, et al.: ICU Telemedicine Comanagement Methods and Length of Stay. Chest. 2016; 150(2): 314–9.\n\nKahn JM, Le TQ, Barnato AE, et al.: ICU Telemedicine and Critical Care Mortality: A National Effectiveness Study. Med Care. 2016; 54(3): 319–25.\n\nLilly CM, Cody S, Zhao H, et al.: Hospital mortality, length of stay, and preventable complications among critically ill patients before and after tele-ICU reengineering of critical care processes. JAMA. 2011; 305(21): 2175–83.\n\nYoung LB, Chan PS, Lu X, et al.: Impact of telemedicine intensive care unit coverage on patient outcomes: a systematic review and meta-analysis. Arch Intern Med. 2011; 171(6): 498–506.\n\nMurray-Torres TM, Wallace F, Bollini M, et al.: Anesthesiology Control Tower: Feasibility Assessment to Support Translation (ACT-FAST)-a feasibility study protocol. Pilot Feasibility Stud. 2018; 4(1): 38.\n\nHornbæk K, Law ELC: Meta-analysis of correlations among usability measures. Proceedings of the SIGCHI conference on Human factors in computing systems. ACM. 2007; 617–626.\n\nPowell BJ, McMillen JC, Proctor EK, et al.: A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012; 69(2): 123–57.\n\nCurran GM, Bauer M, Mittman B, et al.: Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012; 50(3): 217–26.\n\nProctor E, Silmere H, Raghavan R, et al.: Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011; 38(2): 65–76.\n\nFritz BA, Chen Y, Murray-Torres TM, et al.: Using machine learning techniques to develop forecasting algorithms for postoperative complications: protocol for a retrospective study. BMJ Open. 2018; 8(4): e020124.\n\nHelsten DL, Ben Abdallah A, Avidan MS, et al.: Methodologic Considerations for Collecting Patient-reported Outcomes from Unselected Surgical Patients. Anesthesiology. 2016; 125(3): 495–504.\n\nAmerican Society of Anesthesiologists: Standards for basic anesthetic monitoring. 2015 (accessed 8/25/2016).\n\nThe Joint Commission: Surgical Care Improvement Project Core Measure Set; Effective for Discharges January 1, 2014. 2014 (accessed 8/25/2016).\n\nThe Joint Commission: Specifications Manual for National Hospital Inpatient Quality Measures. 2016 (accessed 8/25/2016).\n\nAmerican Society of Anesthesiologists: Statement on the Surgical Care Improvement Project (SCIP). 2015 (accessed 8/25/2016).\n\nFritz B, Chen Y, Murray-Torres TM, et al.: Protocol for a retrospective study using machine learning techniques to develop forecasting algorithms for postoperative complications: the ACTFAST-2 study. BMJ Open. 2018.\n\nEdwards KE, Hagen SM, Hannam J, et al.: A randomized comparison between records made with an anesthesia information management system and by hand, and evaluation of the Hawthorne effect. Can J Anesth. 2013; 60(10): 990–7.\n\nMcCambridge J, Witton J, Elbourne DR: Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. J Clin Epidemiol. 2014; 67(3): 267–77.\n\nMcCarney R, Warner J, Iliffe S, et al.: The Hawthorne Effect: a randomised, controlled trial. BMC Med Res Methodol. 2007; 7: 30.\n\nSchubert A, Eckhout GV, Ngo AL, et al.: Status of the anesthesia workforce in 2011: evolution during the last decade and future outlook. Anesth Analg. 2012; 115(2): 407–27.\n\nGurusamy KS, Gluud C, Nikolova D, et al.: Assessment of risk of bias in randomized clinical trials in surgery. Br J Surg. 2009; 96(4): 342–9.\n\nChan AW, Tetzlaff JM, Altman DG, et al.: SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013; 158(3): 200–7.\n\nSchulz KF, Moher D, Altman DG: CONSORT 2010 comments. Lancet. 2010; 376(9748): 1222–3.\n\nTaichman DB, Sahni P, Pinborg A, et al.: Data Sharing Statements for Clinical Trials: A Requirement of the International Committee of Medical Journal Editors. JAMA. 2017; 317(24): 2491–2492.
}
|
[
{
"id": "34272",
"date": "05 Jun 2018",
"name": "Leif Saager",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThank you very much for the opportunity to review this innovative and timely submission. The manuscript is eloquently written and the study protocol comprehensively described; our comments are therefore few and minor.\nIn this article the authors present a study protocol for a randomized controlled trial in the field of intraoperative clinical decision support. The authors propose to randomize 12,000 patients to either intraoperative clinical decision support or enhanced intraoperative clinical decision support by utilizing a novel Anesthesia Control Tower (ACT) concept. Throughout the article the authors thoroughly present their pragmatic study with adequate details and a thoughtful patient-centric approach. Their identification of the complexity of the anesthetic practice and cognitive requirements is well founded, and their reference to the ICU remote monitoring systems is established.\n\nOn page 3, paragraph 1, the authors state that “10-40% of patients will experience some sort of postoperative surgical complication”. The citations mostly refer to elderly and/or high-risk surgical patients. Perhaps the authors could consider adding a reference for a general surgical population.\n\nOn page 4, the authors state the ACT will function only on days with at least 2 anesthesia providers available. 
Could this introduce bias into the study as on OR days with high volume, or complex cases requiring lower staffing ratios, the availability of staff for the ACT would be less likely?\n\nOn page 7, paragraph 2, the authors state an anticipated transition in electronic health records. In our experience, implementation of a new record keeping system can increase cognitive load, documentation errors, and lags in data acquisition. Our concern would be a possible compromise of study data. Do the authors have a contingency/transition plan available?\n\nOn page 8, the authors base the sample size calculation on core temperature measurements. The rest of the manuscript is less specific as to the site of temperature measurement. Will only core temperatures be utilized in this study?\n\nOn page 9, paragraph 2, the authors propose an innovative educational curriculum. Would the authors consider providing more detail on the implementation and evaluation of this component?\n\nIn Table 2, the authors describe secondary outcomes. Would it be possible to add an appendix to provide definitions for these parameters or reference NSQIP/STS documents as the source of these definitions?\n\nIs the rationale for, and objectives of, the study clearly described? Yes\n\nIs the study design appropriate for the research question? Yes\n\nAre sufficient details of the methods provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": [
{
"c_id": "3833",
"date": "24 Aug 2018",
"name": "Teresa Murray-Torres",
"role": "Author Response",
"response": "Thank you for taking the time to review our manuscript and provide feedback. We have submitted a revised version of our protocol addressing the reviewer's comments. Our changes in response to the referee are as follows:\n\nOn page 3, paragraph 1, the authors state that “10-40% of patients will experience some sort of postoperative surgical complication”. The citations mostly refer to elderly and/or high-risk surgical patients. Perhaps the authors could consider adding a reference for a general surgical population.\n\nWe have updated this statistic to “5-40%,” including a reference examining NSQIP complication rates in patients undergoing orthopedic surgical procedures, primarily elective total joint procedures.\n\nOn page 4, the authors state the ACT will function only on days with at least 2 anesthesia providers available. Could this introduce bias into the study as on OR days with high volume, or complex cases requiring lower staffing ratios, the availability of staff for the ACT would be less likely?\n\nWe have attempted to minimize the risk of bias secondary to ACT staff availability by performing OR randomization each day with a 1:1 allocation. We anticipate that this will allow for any staffing variations to equally affect both the intervention and control groups to minimize bias. We have updated the manuscript to specifically address this point.\n\nOn page 7, paragraph 2, the authors state an anticipated transition in electronic health records. In our experience, implementation of a new record keeping system can increase cognitive load, documentation errors, and lags in data acquisition. Our concern would be a possible compromise of study data. Do the authors have a contingency/transition plan available?\n\nFortunately, the data required to evaluate the primary and secondary outcomes in this study is electronically populated from patient monitoring data (temperature) or autopopulated into the electronic medical record following measurement (glucose). 
Although we do not anticipate any significant difficulties with ensuring the integrity of the study data, we do plan to perform an analysis to confirm that there has been no significant compromise of study data.\n\nOn page 8, the authors base the sample size calculation on core temperature measurements. The rest of the manuscript is less specific as to the site of temperature measurement. Will only core temperatures be utilized in this study?\n\nYes, we plan to only utilize core temperature in our analysis of temperature as a primary outcome. This has been updated in the manuscript.\n\nOn page 9, paragraph 2, the authors propose an innovative educational curriculum. Would the authors consider providing more detail on the implementation and evaluation of this component?\n\nAt present, we are still actively developing the educational curriculum for residents rotating through the ACT. The specific endpoints for the protocol assessing this substudy are not yet defined.\n\nIn Table 2, the authors describe secondary outcomes. Would it be possible to add an appendix to provide definitions for these parameters or reference NSQIP/STS documents as the source of these definitions?\n\nWe have added an appendix to define the postoperative surrogate outcomes for the study."
}
]
},
{
"id": "35259",
"date": "29 Jun 2018",
"name": "Morten H. Bestle",
"expertise": [
"Clinical research in intensive care medicine"
],
"suggestion": "Approved",
"report": "Approved\n\nThank you for the opportunity to review this paper.\nIn the manuscript the authors describe a pilot randomized controlled trial that aims to demonstrate the implementation and utility of the anesthesiology control tower (ACT) in improving adherence to best care practices when compared to enhanced usual care. The authors propose to randomize 12,000 patients over the study period, with approximately 6,000 patients in the control arm and 6,000 patients in the intervention arm. Clinicians grouped in the intervention arm will be provided with real-time feedback based on the available electronic resources. Primary and secondary outcomes will be compared to the control group.\n\nPage 3 paragraph 2: The authors state that some of the risks of perioperative adverse events may be a manifestation of either underlying patient pathology or the complexity of the surgical procedure itself. The authors could consider elaborating on that statement in more detail. How big is the proportion of underlying patient pathology and complex surgical procedures?\nPage 3 paragraph 2: The authors state that prior studies have documented that deviation from evidence-based standards of care is common, and that deviation results in poorer patient outcomes. 
Which outcomes have been the focus of prior studies?\nPage 3 paragraph 8: Why have you chosen these outcomes to be the primary outcomes?\nPage 4 paragraph 2: The authors mention that only patients undergoing surgery between 7:00 AM and 4:00 PM Monday through Friday will be included. Have you considered that there could be a difference between elective and acute surgery? Are clinicians more prone to follow clinical guidelines at day time compared to night time?\n\nIs the rationale for, and objectives of, the study clearly described? Yes\n\nIs the study design appropriate for the research question? Yes\n\nAre sufficient details of the methods provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": [
{
"c_id": "3832",
"date": "24 Aug 2018",
"name": "Teresa Murray-Torres",
"role": "Author Response",
"response": "Thank you for taking the time to review our manuscript and provide feedback. We have submitted a revised version of our protocol addressing the reviewer's comments. Our changes in response to the referee are as follows:\n\nPage 3 paragraph 2: The authors state that some of the risks of perioperative adverse events may be a manifestation of either underlying patient pathology or the complexity of the surgical procedure itself. The authors could consider elaborating on that statement in more detail. How big is the proportion of underlying patient pathology and complex surgical procedures?\n\nWe have expanded this sentence to highlight the development of complex surgical risk calculators to evaluate perioperative risk using both patient pathology and the surgical procedure.\n\nPage 3 paragraph 2: The authors state that prior studies have documented that deviation from evidence-based standards of care is common, and that deviation results in poorer patient outcomes. Which outcomes have been the focus of prior studies?\n\nWe have updated this section to highlight that deviation from evidence-based standards of care is ubiquitous across a variety of health care settings and may be associated with an increase in a number of adverse patient outcomes, including surgical site infection, pneumonia, and mortality.\n\nPage 3 paragraph 8: Why have you chosen these outcomes to be the primary outcomes?\n\nThese outcomes were selected because they are routinely and reliably tracked in the electronic medical record and optimal perioperative management of temperature and blood glucose is known to influence clinical outcome. We have added this information to the manuscript.\n\nPage 4 paragraph 2: The authors mention that only patients undergoing surgery between 7:00 AM and 4:00 PM Monday through Friday will be included. Have you considered that there could be a difference between elective and acute surgery? Are clinicians more prone to follow clinical guidelines at day time compared to night time?\n\n
We do recognize that this is a limitation of our current study, but we have attempted to account for any variation in guideline compliance during off-hours by equally applying time exclusion criteria to both our control and intervention ORs. Additionally, we have designated that patients having a surgical procedure with >50% of the operative time occurring outside of ACT hours will be excluded from analysis. Evaluating variations in compliance with perioperative guidelines outside of normal working hours is an interesting proposal, and may be considered as part of a future expansion of the ACT concept."
}
]
}
] | 1
|
https://f1000research.com/articles/7-623
|
https://f1000research.com/articles/7-1001/v1
|
04 Jul 18
|
{
"type": "Research Article",
"title": "What is a predatory journal? A scoping review",
"authors": [
"Kelly D. Cobey",
"Manoj M Lalu",
"Becky Skidmore",
"Nadera Ahmadzai",
"Agnes Grudniewicz",
"David Moher",
"Manoj M Lalu",
"Becky Skidmore",
"Nadera Ahmadzai",
"Agnes Grudniewicz"
],
"abstract": "Background: There is no standardized definition of what a predatory journal is, nor have the characteristics of these journals been delineated or agreed upon. In order to study the phenomenon precisely, a definition of predatory journals is needed. The objective of this scoping review is to summarize the literature on predatory journals, to describe its epidemiological characteristics, and to extract empirical descriptions of potential characteristics of predatory journals. Methods: We searched five bibliographic databases (Ovid MEDLINE, Embase Classic + Embase, ERIC, PsycINFO, and Web of Science) on January 2nd, 2018. A related grey literature search was conducted March 27th, 2018. Eligible studies were those published in English after 2012 that discuss predatory journals. Titles and abstracts of records obtained were screened. We extracted epidemiological characteristics from all search records discussing predatory journals. Subsequently, we extracted statements from the empirical studies describing empirically derived characteristics of predatory journals. These characteristics were then categorized and thematically grouped.\n\nResults: 920 records were obtained from the search. 344 of these records met our inclusion criteria. The majority of these records took the form of commentaries, viewpoints, letters, or editorials (78.44%), and just 38 records were empirical studies that reported empirically derived characteristics of predatory journals. We extracted 109 unique characteristics from these 38 studies, which we subsequently thematically grouped into six categories (journal operations; article; editorial and peer review; communication; article processing charges; and dissemination, indexing, and archiving) and five descriptors.\n\nConclusions: This work identified a corpus of potential characteristics of predatory journals. 
Limitations of the work include our restriction to English language articles, and the fact that the methodological quality of articles included in our extraction was not assessed. These results will be provided to attendees at a stakeholder meeting seeking to develop a standardized definition for what constitutes a predatory journal.",
"keywords": [
"scholarly publishing",
"open access",
"predatory journals",
"predatory publishers",
"illegitimate journals",
"peer review",
"reporting quality"
],
"content": "Introduction\n\nThe term ‘predatory journal’ was coined less than a decade ago by Jeffrey Beall1. Predatory journals have since become a hot topic in the scholarly publishing landscape. A substantial body of literature discussing the problems created by predatory journals, and potential solutions to stop the flow of manuscripts to these journals, has rapidly accumulated2–6. Despite increased attention in the literature and related educational campaigns7, the number of predatory journals, and the number of articles these journals publish, continues to increase rapidly8. Some researchers may be tricked into submitting to predatory journals9, while others may knowingly do so to pad their curriculum vitae for career advancement10.\n\nOne factor that may be contributing to the rise of predatory journals is that there is currently no agreed upon definition of what constitutes a predatory journal. The characteristics of predatory journals have not been delineated, standardized, or broadly accepted. In the absence of a clear definition, it is difficult for stakeholders such as funders and research institutions to establish explicit policies to safeguard work they support from being submitted to and published in predatory journals. Likewise, if characteristics of predatory journals have not been delineated and accepted, it is difficult to take an evidence-based approach towards educating researchers on how to avoid them. Establishing a consensus definition has the potential to inform policy and to significantly strengthen educational initiatives such as Think, Check, Submit7.\n\nThe challenge of defining predatory journals has been recognized11, and recent discussion in the literature highlights a variety of potential definitions. Early definitions by Beall describe predatory journals as outlets “which publish counterfeit journals to exploit the open-access model in which the author pays” and journals that were “dishonest and lack transparency”1. 
Others have since suggested that we move away from using the term ‘predatory journal’, in part because the term neglects to adequately capture journals that fail to meet expected professional publishing standards, but do not intentionally act deceptively12–15. This latter view suggests that the rise of so-called predatory journals is not strictly associated with dubious journal operations that use the open-access publishing model (e.g., publishing virtually anything to earn an article processing charge (APC)), but represents a wider spectrum of problems. For example, there is the conundrum that some journals hailing from the global south may not have the knowledge, resources, or infrastructure to meet best practices in publishing. Devaluing or black-listing such journals may be problematic as they serve an important function in ensuring the dissemination of research on topics of regional significance.\n\nOther terms to denote predatory journals such as “illegitimate journals9,16”, “deceptive journals15”, “dark” journals17, and “journals operating in bad faith13” have appeared in the literature, but like the term “predatory journal” they are reductionist11 and may not adequately reflect the varied spectrum of quality present in the scholarly publishing landscape and the distinction between low-quality and intentionally dubious journals. These terms have also not garnered widespread acceptance, and it is possible that the diversity in nomenclature leads to confusion for researchers and other stakeholders.\n\nHere, we seek to address the question “what is a predatory journal?” by conducting a scoping review18,19 of the literature. Our aims are twofold. Firstly, in an effort to provide an overview of the literature on the topic, we seek to describe epidemiological characteristics of all records discussing predatory journals. Secondly, we seek to synthesize the existing empirically derived characteristics of predatory journals. 
The impetus for this work is to establish a list of evidence-based traits and characteristics of predatory journals. This corpus of possible characteristics of predatory journals will be provided to global stakeholders at a meeting to generate a consensus definition of predatory journals.\n\n\nMethods\n\nPrior to initiating this study, we drafted a protocol that was posted on the Open Science Framework prior to data analysis (please see: https://osf.io/gfmwr/). We did not register our review with PROSPERO as the registry does not accept scoping reviews. Other than the protocol deviations described below, the authors affirm that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that discrepancies from the study as planned have been explained. We briefly re-state our study methods here. Large sections of the methods described here are taken directly from the original protocol. We used the PRISMA statement20 to guide our reporting of this scoping review.\n\nFor our full search strategy, please see Supplementary File 1. An experienced medical information specialist (BS) developed and tested the search strategy using an iterative process in consultation with the review team. Another senior information specialist peer reviewed the strategy prior to execution using the PRESS Checklist21. We searched a range of databases in order to achieve cross-disciplinary coverage. These included: Web of Science and four Ovid databases: Ovid MEDLINE®, including Epub Ahead of Print and In-Process & Other Non-Indexed Citations, Embase Classic + Embase, ERIC, and PsycINFO. We performed all searches on January 2, 2018.\n\nThere were no suitable controlled vocabulary terms for this topic in any of the databases. 
We used various free-text phrases to search, including multiple variations of root words related to publishing (e.g., edit, journal, publication) and predatory practices (e.g., bogus, exploit, sham). We adjusted vocabulary and syntax across the databases. We limited results to the publication years 2012 to the present, since 2012 is the year in which the term “predatory journal” reached the mainstream literature1.\n\nWe also searched abstracts of relevant conferences (e.g., The Lancet series and conference “Increasing Value, Reducing Waste”, International Congresses on Peer Review and Scientific Publication) and Google Scholar to identify grey literature. For the purposes of our Google Scholar search, we conducted an advanced search (on March 27, 2018) using the keywords: predatory, journal, and publisher. We restricted this search to content published from 2012 onward. A single reviewer (KDC) reviewed the first 100 hits and extracted all potentially relevant literature encountered for review, based on title. We did not review content from file sources that were from mainstream publishers (e.g., Sage, BMJ, Wiley), as we expected these to be captured in our broader search strategy.\n\nOur study population included articles, reports, and other digital documents that discuss, characterize, or describe predatory journals. We included all study designs from any discipline captured by our search that were reported in English. This included experimental and observational research, as well as commentaries, editorials and narrative summaries in our epidemiological extraction. For extraction of characteristics of predatory journals we restricted our sample to studies that specifically provided empirically derived characteristics of predatory journals.\n\nData extraction forms were developed and piloted prior to data extraction. Details of the forms used are provided in the Open Science Framework, see here: https://osf.io/p5y2k/. 
We first screened titles and abstracts against the inclusion criteria. We verified that full-text articles met the inclusion criteria and we extracted information on corresponding author name, corresponding author country, year of publication (we selected the most recent date stated), study design (as assessed by the reviewers), and journal name. We also extracted whether or not the paper provided a definition of a predatory journal. This was coded as yes/no and included both explicit definitions (e.g. “Predatory journals are…”) and implicit definitions.\n\nWhen extracting data, we restricted our sample of articles to those that provided a definition of predatory journals, or described characteristics of predatory journals, based on empirical work (i.e., not opinion, not definitions which referenced previous work). Specifically, we restricted our sample of articles to those classed as having an empirical study design and then re-vetted each article to ensure that the study addressed defining predatory journals or their characteristics. For those articles included, we extracted sections of text statements describing the traits/characteristics of predatory journals. Extraction was done by a single reviewer, with verification conducted by a second reviewer. Conflicts were resolved via consensus. In instances where an empirically derived trait/characteristic of predatory journals was mentioned in several sections of the article, we extracted only a single representative statement.\n\nOur data analysis involved both quantitative (i.e., frequencies and percentages) and qualitative (i.e., thematic analysis) methods. First, a list of potential characteristics of predatory journals was generated collaboratively by the two reviewers who conducted data extraction (KDC, NA). Subsequently, each of the statements describing characteristics of predatory journals that were extracted from the included articles was categorized using the list generated. 
During the categorization of the extracted statements, if a statement did not apply to a category already on the list, a new category was added. Where duplicate statements were inadvertently extracted from a single record we categorized these only once. During the categorization and grouping process, details on the specific wording of statements from specific included records were not retained (i.e., our categories and our themes do not preserve the original wording of the extracted text).\n\nSubsequently, in line with Galipeau and colleagues22, after this initial categorization, we collated overlapping or duplicate categories into themes. Then, two reviewers (KDC, AG) evaluated recurring themes in the work to synthesize the data. A coding framework was iteratively developed by KDC and AG by coding each characteristic statement independently and inductively (i.e., without using a theory or framework a priori). The two reviewers met to discuss these codes, and through consensus decided on the final themes and their definitions. The reviewers then went back to the data and recoded with the agreed-upon themes. Lastly, the reviewers met to compare assignment of themes to statements. Discrepancies were resolved by consensus. Two types of themes emerged: categories (i.e., features of predatory journals to which the statements referred) and descriptors (i.e., statements which described these features, usually with either a positive or negative value).\n\nWe conducted data extraction of epidemiological characteristics of papers discussing predatory journals in duplicate. The original protocol indicated this would be done by a single reviewer with verification. The original protocol stated we would extract information on the discipline of the journals publishing our articles included for epidemiological data extraction (as defined by MEDLINE). 
Instead, we used SCIMAGOJR (SJR)23 to determine journal subject areas post-hoc and only extracted this information for the included empirical articles describing empirically derived characteristics of predatory journals. For included articles, post-hoc, we decided to extract information on whether or not the record reported on funding.\n\n\nResults\n\nPlease see Figure 1 for record and article flow during the review. The original search captured 920 records. We excluded 19 records from initial screening because they were not in English (N = 13), we could not access a full-text document (N = 5; of which one was behind a paywall at a cost of greater than $25 CAD), or the reference referred to a conference proceeding containing multiple documents (N = 1).\n\nWe screened a total of 901 title and abstract records obtained from the search strategy. Of these, 402 were included for full-text screening. 499 records were excluded for not meeting our study inclusion criteria. After full-text screening of the 402 studies, 334 were determined to have full texts and to discuss predatory journals. The remaining 68 records were excluded because: they were not about predatory journals (N = 36), did not have full texts (N = 19), were abstracts (N = 12), or were published in a language other than English (N = 1). The 334 articles included for epidemiological data extraction were published between 2012 and 2018 with corresponding authors from 43 countries. The number of publications mentioning predatory journals increased each year from 2012 to 2017 (See Table 1). 
The vast majority of these publications took the form of commentaries, viewpoints, letters, or editorials (262/334; 78.44%).\n\ni 61 articles did not clearly state the corresponding authors’ nationality, and 1 stated they wished to remain anonymous\n\nii 1 article did not clearly state the corresponding author’s nationality\n\niii Note this is truncated data for 2018 since we conducted our search on January 2nd, 2018\n\nOf the articles discussing predatory journals, only 38 specifically described a study that reported empirically derived characteristics or traits of predatory journals. These studies were published between 2014 and 2018 and produced by corresponding authors from 19 countries. The majority of these included studies were observational studies (26/38; 68.4%) (See Table 1 and Table 2).\n\nFive additional records obtained from the grey literature search were excluded. These records were either duplicates of studies captured in the main search or they did not provide empirically derived characteristics of predatory journals.\n\nThe list generated to categorize the extracted statements describing characteristics of predatory journals had 109 categories. Two types of themes were identified using qualitative thematic analysis: categories and descriptors. Each statement addressed at least one of the following categories: journal operations, article, editorial and peer review, communication, article processing charges, and dissemination, indexing, and archiving. Within these categories, statements used descriptors including: deceptive or lacking transparency, unethical research or publication practices, persuasive language, poor quality standards, or high quality standards. Statements that did not include a descriptive component (i.e., were neutral) were coded as not applicable (See Table 3 for themes and definitions). Statements addressing more than one category or using more than one descriptor were coded multiple times. 
Below we briefly summarize the qualitative findings by category (For full results, see Table 4).\n\nJournal Operations. Predatory journal operations were described as: being deceptive or lacking transparency (19 statements), demonstrating poor quality standards (17 statements), demonstrating unethical research or publication practices (14 statements), and using persuasive language (two statements). Five statements were neutral or non-descriptive. The most common characteristics of the journal operations category were “Journals display low levels of transparency, integrity, poor quality practices of journal operations” (N=14 articles); “Contact details of publisher absent or not easily verified” (N=11 articles); and “Journals are published by/in predominantly by authors from specific countries” (N=10 articles).\n\nArticle. Articles in predatory journals were described as: demonstrating poor quality standards (six statements), demonstrating high quality standards (two statements), being deceptive or lacking transparency (three statements), and demonstrating unethical research or publication practices (three statements). Four statements were neutral or non-descriptive. The most common characteristics of the article category were: “Journals are published by/in predominantly by authors from specific countries” (N=10 articles); “Quality of articles rated as poor” (N=5 articles); and “Articles are poorly cited” (N=5 articles).\n\nEditorial and Peer Review. The editorial and peer review process was described as: demonstrating unethical research or publication practices (eight statements), being deceptive or lacking transparency (seven statements), demonstrating poor quality standards (five statements), demonstrating high quality standards (two statements), and using persuasive language (one statement). Two statements were neutral or non-descriptive. 
The most common characteristics of the editorial and peer review category were: “Journals conduct poor quality peer review” (N=8 articles) and “Journals have short peer review times”; “Editorial board is not stated or incomplete”; “Editorial board lacks legitimacy (appointed without knowledge, wrong skillset)” (N=7 articles each).\n\nCommunication. Communication by predatory journals was described as: using persuasive language (12 statements), demonstrating poor quality standards (four statements), being deceptive or lacking transparency (four statements), and demonstrating high quality standards (one statement). All communication statements were descriptive. The most common characteristic of the communications category was: “Journals solicit papers via aggressive e-mail tactics” (N=13 articles).\n\nArticle Processing Charges. Article processing charges in predatory journals were described as: being deceptive or lacking transparency (three statements), using persuasive language (two statements), demonstrating poor quality standards (one statement), demonstrating unethical research or publication practices (one statement), and demonstrating high quality standards (one statement). Two statements were neutral or non-descriptive. The most common characteristics of the article processing charges category were: “APCs are lower than at legitimate journals”; “Journal does not specify APCs”; and “Journal has hidden APCs or hidden information on APCs” (N=9 articles each).\n\nDissemination, Indexing, and Archiving. Dissemination, indexing, and archiving were described as: demonstrating poor quality standards (five statements), demonstrating unethical research or publication practices (one statement), and as being deceptive or lacking transparency (one statement). Seven statements were neutral or non-descriptive. 
The most common characteristics of the dissemination, indexing, and archiving category were: “Journals state they are open access” (N=11 articles); “Journal may be listed in DOAJ” (N=8 articles); and “Journals are not indexed” (N=7 articles).\n\n\nDiscussion\n\nThis scoping review identified 334 articles mentioning predatory journals, with corresponding authors from more than 40 countries. The trajectory of articles on this topic is increasing rapidly. As an example, our search captured five articles from 2012 and 140 articles from 2017. The majority of articles captured took the form of a commentary, editorial or letter; just 38 had relevant empirically derived characteristics of predatory journals. One possibility for why there is little empirical work on this topic may be that most funding agencies have not set aside funding for journalology or a related field of enquiry (research on research). There are recent exceptions to this24, but in general such funds are not widely available. Of the 38 studies from which we extracted data, post-hoc we examined the percentage that reported funding, and found that just 13.16% (5/38) did, 21.05% (8/38) did not, and 65.79% (25/38) did not report information on funding. Even among the five studies that reported funding, several of these were not project funding specific to the research, but rather broader university chair or fellowship support.\n\nA total of 109 unique characteristics were extracted from the 38 empirical articles. When examining these unique characteristics some clear contrasts emerge. For example, we extracted the characteristic “Journal APCs clearly stated” (N = 4 articles) as well as the characteristics “Journal does not specify APCs” (N = 9 articles) and “Journal has hidden APCs or hidden information on APCs” (N = 9 articles). Such potential inconsistencies in the reported characteristics will make it difficult to define predatory journals. 
Without a (consensus) definition it will be difficult to study the construct in a meaningful manner. It also makes policy initiatives and educational outreach imprecise and potentially less effective.\n\nWe believe a cogent next move is to invite a broad spectrum of stakeholders to a summit. Possible objectives could be to develop a consensus definition of a predatory journal, discuss how best to examine the longitudinal impact of predatory journals, and develop collaborative policy and educational outreach to minimize the impact of predatory publishers on the research community. As a starting point for defining predatory journals, those involved in a global stakeholder meeting to establish a definition for predatory journals may wish to exclude all characteristics that are common to legitimate journals. Further, one could exclude all characteristics that are conflicting, or which directly oppose one another. Another fruitful approach may be to focus on characteristics that can easily be audited to determine if journals do or do not meet the expected standards.\n\nThe unique characteristics we extracted were thematically grouped into six categories and five descriptors. Although we did identify one positive descriptor, high quality standards, the majority of descriptors were negative. Most categories (all but ‘Communication’) also included neutral or non-descriptive statements. The presence of both positive and neutral descriptors points to an overlap between characteristics that describe predatory journals and those that are viewed as ‘legitimate’, further emphasizing the challenges in defining predatory journals. The category with the most statements was ‘Journal Operations’ with 19 statements describing operations as deceptive or lacking transparency. The ‘Communication’ category had the most statements described as persuasive (11 statements), highlighting the targeted language predatory journals may use to convince the reader toward a certain action. 
The descriptor of unethical or unprofessional publication practices appeared in statements in all but the ‘Communication’ category and was most frequent in ‘Journal Operations’ and ‘Editorial and Peer Review’. These findings point to issues of great concern in research and publishing and an urgency to develop interventions and education to protect researchers, funders, and knowledge users.\n\nThere are a number of relevant limitations of this work that should be acknowledged. Firstly, while we endeavoured to ensure our systematic search and grey literature appraisal were comprehensive, it is possible that we missed some relevant documents that would have contributed additional empirically derived characteristics of predatory journals. As an example, several authors of this manuscript recently published a paper containing relevant empirical data and predatory characteristics2; however, because this work was published in a commentary format, which did not include an abstract or use the search terms in the article title, it was not picked up in our search. Indeed, part of the challenge of systematically searching on this topic is the lack of agreement and diversity of terms used to describe predatory journals. Further, reviewers deciding which articles to include based on our inclusion criteria had to make judgements on study designs and methods used. Due to inconsistent reporting and terminology, this was not always straightforward and may have resulted in inadvertent exclusions. Secondly, in keeping with accepted scoping review methodology, we did not appraise the methodological quality of the articles that were included in our extraction. This means that the characteristics extracted have not been considered in the context of the study design or methodological rigour of the work. In addition, we only extracted definitions from empirical studies describing characteristics of predatory journals. 
It is possible that further characteristics would have been included in our results had non-empirical research articles not been excluded. We chose to exclude these types of articles as they are more likely to be based on opinion or individual experience rather than evidence. Finally, we limited our study to articles published in English. It is possible that work published in other languages may have provided additional characteristics of predatory journals.\n\nReaching a consensus on what defines predatory journals, and what features reflect these, may be particularly useful to stakeholders (e.g., funders, research institutions) with a goal of establishing a list of vetted journals to recommend to their researchers. Such lists could be updated annually. Lists which attempt to curate predatory journals rather than legitimate journals are unlikely to achieve success given the reactive nature of this type of curation and the issue that new journals cannot easily be systematically discovered for evaluation25. The development and use of digital technologies to provide information about journal publication practices (e.g., membership in the Committee on Publication Ethics26, listing in the Directory of Open Access Journals27) may also prove to be a fruitful approach in reducing researchers’ submissions to predatory journals; empowering authors with knowledge is an important step in decision-making. Currently, researchers receive little education or support about navigating journal selection and submission processes. We envision a plug-in tool that researchers could click to get immediate feedback about a journal page they are visiting and whether it has characteristics of predatory journals. 
This feedback could provide them with the relevant information to determine if the journal suits their needs and/or meets any policy requirements to which they must adhere (e.g., digital preservation, indexing).\n\n\nData availability\n\nStudy data and tables are available on the Open Science Framework, see: https://osf.io/4zm3t/.\n\nData are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors declared that no grants were involved in supporting this work. DM is funded by a University Research Chair. MML is supported by The Ottawa Hospital Anesthesia Alternate Funds Association and the Scholarship Protected Time Program, Department of Anesthesiology and Pain Medicine, uOttawa.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to Raymond Daniel (Knowledge Synthesis Group, Ottawa Hospital Research Institute) who assisted with the acquisition and import of study files into the DSR platform.\n\n\nSupplementary material\n\nSupplementary file 1: Search Strategy.\n\nClick here to access the data.\n\nSupplementary file 2: Full citations of included articles.\n\nClick here to access the data.\n\n\nReferences\n\nBeall J: Predatory publishers are corrupting open access. Nature. 2012; 489(7415): 179. PubMed Abstract | Publisher Full Text\n\nMoher D, Shamseer L, Cobey KD, et al.: Stop this waste of people, animals and money. Nature. 2017; 549(7670): 23–5. PubMed Abstract | Publisher Full Text\n\nLalu M, Shamseer L, Cobey KD, et al.: How stakeholders can respond to the rise of predatory journals. Nat Hum Behav. 2017; 1: 852–5. Publisher Full Text\n\nClark J, Smith R: Firm action needed on predatory journals. BMJ. 2015; 350(1): h210. PubMed Abstract | Publisher Full Text\n\nBartholomew RE: Science for sale: the rise of predatory journals. J R Soc Med. 2014; 107(10): 384–385. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmart P: Predatory journals and researcher needs. Learn Publ. 2017; 30(2): 103–5. Publisher Full Text\n\nThink, Check, Submit. Reference Source\n\nShen C, Björk B: 'Predatory' open access: a longitudinal study of article volumes and market characteristics. BMC Med. 2015; 13: 230. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCobey K: Illegitimate journals scam even senior scientists. Nature. 2017; 549(7670): 7. PubMed Abstract | Publisher Full Text\n\nKolata G: Many Academics Are Eager to Publish in Worthless Journals. New York Times. Reference Source\n\nBerger M: Everything you ever wanted to know about predatory publishing but were afraid to ask. ACRL. 2017; 206–7. Reference Source\n\nWager E: Why we should worry less about predatory publishers and more about the quality of research and training at our academic institutions. J Epidemiol. 2017; 27(3): 87–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnderson R: Should we retire the term “predatory publishing”? Scholarly Kitchen. Accessed March 29, 2018. 2015. Reference Source\n\nShamseer L, Moher D: Thirteen ways to spot a “predatory journal” (and why we shouldn’t call them that). Times Higher Education. Accessed November 15, 2017. 2017. Reference Source\n\nEriksson S, Helgesson G: Time to stop talking about “predatory journals.” Learn Publ. 2018; 31(2): 181–3. Publisher Full Text\n\nMoher D, Moher E: Stop Predatory Publishers Now: Act Collaboratively. Ann Intern Med. 2016; 164(9): 616–7. PubMed Abstract | Publisher Full Text\n\nButler D: Investigating journals: The dark side of publishing. Nature. 2013; 495(7442): 433–5. PubMed Abstract | Publisher Full Text\n\nArksey H, O'Malley L: Scoping studies: Towards a methodological framework. Int J Soc Res Methodol. 2005; 8(1): 19–32. Publisher Full Text\n\nLevac D, Colquhoun H, O'Brien KK: Scoping studies: advancing the methodology. Implement Sci. 2010; 5: 69. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoher D, Liberati A, Tetzlaff J, et al.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009; 6(7): e1000097. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcgowan J, Sampson M, Salzwedel DM, et al.: PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016; 75: 40–46. PubMed Abstract | Publisher Full Text\n\nGalipeau J, Barbour V, Baskin P, et al.: A scoping review of competencies for scientific editors of biomedical journals. BMC Med. 2016; 14(1): 16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScimagojr. Reference Source\n\nMatthews D: Netherlands to survey every researchers on misconduct. Times Higher Education. 2016. Reference Source\n\nPatwardhan B, Nagarkar S, Gadre SR, et al.: A critical analysis of the ‘UGC-approved list of journals’ Curr Sci. 2018; 114(6): 1299–1303. Publisher Full Text\n\nCommittee on Publication Ethics (COPE). Reference Source\n\nDirectory of Open Access Journals. Reference Source"
}
|
[
{
"id": "35740",
"date": "01 Aug 2018",
"name": "Monica Berger",
"expertise": [
"My knowledge as a scholarly communications librarian who has devoted considerable effort to writing about the topic at hand is extensive but I am not a trained researcher in either the sciences or social sciences so my ability to genuinely judge the methodology is limited."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article represents a unique contribution to what has been written on this topic.\n\nAlthough the core readership for F1000Research consists of scientists, I am sure this article will be read by many non-scientists and many of my suggestions to the authors relate to this point.\nA general comment: the authors should note that most of the data used for empirical studies of predatory publishing is drawn from Beall's List and Beall's List was and is still controversial [the authors' discussion of Beall's List under Introduction is balanced and articulate]. Essentially, any empirical study of predatory publishing is based on one or two sources of data: Beall's List and/or email solicitations which lead to journals and their publisher websites. This should be made explicit.\nThe data that underlies much of the literature is very fuzzy and subjective. Without cross-checking publisher and journal data (e.g. many predatory publishers claim inclusion in DOAJ and this data point is particularly sticky), and probing the content, the underlying literature is limited. Moher, David et al's1 study seems to be one of the only studies to examine content and evaluates the methodological design and research protocols of articles but, as the authors note, it gets excluded because of its publication in commentary format!\nThe overall quantification of the literature differentiating empirical vs. 
editorial is extremely helpful.\nI found the raw data of characteristics pretty overwhelming and I wonder if the authors could somehow aggregate or otherwise organize the information in a way that makes it easier to scan. I recognize that they have summarized their data in the body of the article.\nThe conclusions clearly address the limitations of the study but what I think would be most important is teasing out where the data came from: publisher emails leading to publisher websites, and/or Beall's List leading to journals and publisher websites. Both are imperfect sources.\nI would like to see this data used again with more aggregation. I recognize that scoping reviews are meant to be fast so this article's data could be used for further research.\nSpecific comments: Introduction: Agreed that some journals from the Global South provide important regional research but the authors should note that many of these Global South journals market themselves as \"international\" or \"global\" and do not focus on regional research because of a desire to cast a wide net. Legitimate, amateurish journals deemed as predatory from this group actually would be more likely to have a scope that is regional and specific as opposed to the multidisciplinary scope of many predatory publishers.\nThe authors should explain far more explicitly what a scoping review is and its purpose. 
Non-biomedical readers will be unfamiliar with this type of methodology/article.\n\nI also am not entirely sure about the use of the word \"epidemiological\" in terms of discussing the topic at hand: non-biomedical readers may be unsure what exactly is meant.\nLastly, as much as it is very helpful to identify characteristics of predatory journals as drawn from the literature, it seems somewhat positivist to use this very limited body of literature, which is limited by its heavy use of Beall's List data, as a means to \"generate a consensus definition of predatory journals.\" Until there is more qualitative research and more multidisciplinary and longitudinal research as was done by Shen and Bjork, there are lacunae in the research literature. The recent articles based on the research by this team are groundbreaking but largely limit their scope to biomedical literature.\nScreening and data extraction The use of implicit and explicit definitions is very important and valuable.\nSearch strategy It is possible that some research from librarians and information science scholars might have been missed. There is also some concern that if the articles are open access, they may not have been indexed in traditional databases. This concern relates to the Data Analysis section as well since newer and smaller open access journals may not have a Journal Impact Factor and be excluded from SCIMAGO.\nMapping the data into emergent themes Under the descriptor \"persuasive language,\" the language of predatory journals targets authors and not readers. This should be explained.\nWhat is somewhat confusing to me is separating characteristics in the literature based on the authors' perceptions or evaluations of the journals and publishers versus the actual data drawn from the journals and publishers' emails, journals, articles, and websites. 
Whether or not the author of the underlying articles performed cross-checking is also important.\nTable 4 Characteristics The characteristic JOURNALS HAVE SHORT PEER REVIEW TIMES isn't mapped to a descriptor but it is a very important and common characteristic of publisher appeals to authors. It typically maps to Poor Quality Standards although not in an absolute manner since obviously large, quality journals can also have quick turnaround. It is unclear to me whether, because this characteristic lacks a descriptor, it may lose weight in the analysis. I note that JOURNALS HAVE SHORT/RAPID PUBLICATION TIMES is also a NA descriptor. These two facets are closely related.\nARTICLE SUBMISSION OCCURS VIA EMAIL: this may be a signal of poor standards but is often more a reflection of low budgets and the many amateurish journals that have been lumped into Beall's List.\nJOURNALS DO NOT CONTAIN ANY ARTICLES: the high number of predatory journals without articles is a very important data point that should be emphasized.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3899",
"date": "23 Aug 2018",
"name": "David Moher",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We thank Monica Berger for her thoughtful peer review of our manuscript. We have made revisions throughout (version 2): We have further indicated the limitations of Beall’s lists for this type of research; We have noted the global south issue of journals using “international” or “global” in their titles; We have provided some clarity as to scoping reviews; It is possible we’ve missed some relevant literature from our review (as is the potential in any review exercise) although we believe in its current form it is both broad and multidisciplinary. As a follow-up exercise we will reach out to library/expert listservs related to this field of enquiry; We agree that the two statements without a descriptor are important, however, the length of time for a peer-review or publication cannot be classified as either a positive or negative statement and hence these were not given a descriptor term. While it could be mapped to Poor Quality Standards, we cannot assume that a short peer-review time is indicative of poor quality."
},
{
"c_id": "3900",
"date": "23 Aug 2018",
"name": "David Moher",
"role": "Author Response F1000Research Advisory Board Member",
"response": "In response to Valerie Ann Matarese’s comment we have changed two words in the introduction (version 2). Ross Mounce is misinformed. This is not a “literature review of opinion, and as such, one wonders what the value of the exercise is”. As stated in the screening and data extraction section of the Methods of this scoping review (version 1), “we restricted our sample of articles to those that provided a definition of predatory journals, or described characteristics of predatory journals, based on empirical work (i.e., not opinion, not definitions which referenced previous work)”. We thank Edgardo Rolla for his comments on our scoping review (version 1)."
}
]
},
{
"id": "36292",
"date": "13 Aug 2018",
"name": "Johann Mouton",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThe paper ultimately promises more than what it delivers. It presents the results of an analysis which has resulted in a set of characteristics of predatory journals derived from a scoping review of recent studies. However, the final discussion section is extremely disappointing. There is no attempt by the authors to add much value to the rather fragmented results found through the review. Part of the problem is that the characteristics listed are treated as equally weighted. Most of the authors who have written on the phenomenon of predatory journals in recent years have attempted to end up with a set of fairly authoritative and even 'objective' criteria that would by themselves be sufficient to classify a journal as predatory. Some of these characteristics would include referencing fake indexing, fake impact metrics, not being indexed in the DOAJ, and a few more. In order to get to a 'consensus' view of what are the key characteristics of a predatory journal, a simple listing of all possible characteristics will not take us much further. 
It is perhaps then not surprising that their recommendation is for a consensus type meeting where experts could work towards a consensus definition.\nMore to the point: in my view to get to the kind of end goal of a consensus or more widely acceptable definition, would require a more theoretical or at least conceptual framework that is embedded in some of the work on scientific communication and publishing which stipulates what good practices in (journal) publishing are.\nUnfortunately this paper does not help us much on the way to this goal.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3909",
"date": "23 Aug 2018",
"name": "David Moher",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We are sorry to disappoint Johann Mouton in our scoping review. We believe that a scoping review is a reasonable way to attempt to map the literature. Scoping reviews do not typically weight included studies. We believe our review highlights some of the disagreements in the literature about presumed relevant characteristics of presumed predatory journals."
}
]
},
{
"id": "36291",
"date": "13 Aug 2018",
"name": "Joanna Chataway",
"expertise": [
"Reviewer Expertise Science policy"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting and very useful article on a subject which, as the authors note, is widely discussed but under researched.\n\nThe article sets out to examine data derived from a scoping review in an effort to contribute to a definition of the term 'predatory journal'. Disagreements about whether the term should be used at all are summarised early on in the article and this provides a useful backdrop to the multiple difficulties involved in defining the term.\n\nQuestions and issues raised by the article\nIn considering the issue of predatory journals, the authors raise questions of what might be considered characteristics of a legitimate journal might. The article discusses this only partially and mainly in relation to the difficulties of distinguishing between journals which set out to mislead and which abandon the aims of publish high quality science entirely and, on the other hand, those which are poorly managed and run. It is not necessary I think to give this further and detailed consideration here, but it is important to note that there may well other issues to consider here. For example, there may well be complex relationships between the practices of legitimate journals, and the unintended consequences of impact factor metrics (as noted in The Lancet special issue on 'Increasing value, reducing waste' cited by the authors for example) and the expansion of bad as well as good journals and publication platforms which offer alternatives. 
The Lancet and other critiques point to intense competition involved in publishing in high impact journals, the need to publish for promotion and employment and so on as factors which drive bad practice in general and may also play a role in the rise of predatory journals.\nAnother issue which is only briefly mentioned in the article is whether the norms of publishing and peer review differ across different disciplines. Perhaps given the characteristics of the existing literature it is not possible to say much about this currently, but the authors could raise this more clearly as an issue to be considered in future research. And I think the point should be made that whilst it is common for health research articles to follow the reporting convention of 'Introduction, methods, results, discussion', this is not the case in other fields. Thus having this as a criterion for judging the quality of a journal could be misleading.\n\nClarification of terminology\n\nI would encourage the authors to explain terms such as 'epidemiological characteristics' and 'scoping review' which may be familiar to those who work in health research but not perhaps to others.\n\nSome examples?\nSome of the results would have been clearer to me if examples had been included. This is particularly the case with regard to 'persuasive language'. It is unclear to me what is being referred to by that term.\n\nMissing link?\nI couldn't get the link to further details about the search strategy to work. That accounts for the 'partial' score for the source data question but that may just be a problem for me and not for others.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3908",
"date": "23 Aug 2018",
"name": "David Moher",
"role": "Author Response F1000Research Advisory Board Member",
"response": "In response to Joanna Chataway’s review: We have made some modifications to the limitations section of our paper. We now state (version 2) “Thirdly, our focus was on the biomedical literature. Whether the publication (e.g., having an IMRAD (Introduction Methods Results And Discussion) and peer review norms we’ve used apply across other disciplines is likely an important topic for further investigation.” We have more cleared described scoping reviews (version 2). We believe we have given some examples in Table 3. For example, in response to the query as to the use and meaning of “persuasive language”, we state (Version 1 and version 2) “Language that targets; Language that attempts to convince the author to do or believe something”. We have fixed the broken link to the full search strategy (version 2)."
}
]
}
] | 1
|
https://f1000research.com/articles/7-1001
|
https://f1000research.com/articles/7-1329/v1
|
22 Aug 18
|
{
"type": "Research Article",
"title": "In vitro study of the efficacy of Solanum nigrum against Leishmania major",
"authors": [
"Christine N. Mutoro",
"Johnson K. Kinyua",
"Joseph K. Ng'ang'a",
"Daniel W. Kariuki",
"Johnstone M. Ingonga",
"Christopher O. Anjili",
"Johnson K. Kinyua",
"Joseph K. Ng'ang'a",
"Daniel W. Kariuki",
"Johnstone M. Ingonga",
"Christopher O. Anjili"
],
"abstract": "Leishmania parasites (Kinetoplastida: Trypanosomatidae) are obligate intracellular parasites of macrophages that causes visceral and cutaneous leishmaniases. Currently, there is inadequate therapeutic interventions to manage this endemic tropical disease, transmitted mainly by phlebotomine sandflies hence there is need to develop affordable and effective therapeutic measures. This study determined the in vitro efficacy of Solanum nigrum methanolic and aqueous plant extracts on Leishmania major parasites. Cytotoxic effects of the extracts were determined using vero cells and reported as percentage viability of the cells. The promastigote parasites of Leishmania major were cultured and grown for 3 days in different concentrations of extracts to determine the MIC and IC50 values. The in vitro antileishmanial efficacy was done on macrophages infected with L. major amastigote parasites and then treated with extracts in varying concentrations. The study revealed that all the test extracts had lower toxicity than control drugs, pentostam (IC50= 0.0 92 mg/ml) and amphotericin B (IC50=0.049 mg/ml). The extracts tended to show a dose dependent cytotoxic effect which corresponded to high vero cells viability as their concentration increased. Methanolic extract of S. nigrum from Kisii seemed to be more efficacious in vitro since it knocked out the promastigotes at a lower MIC level (0.5 mg/ml) when compared to all other extracts whose effective MIC level was ≥ 1 mg/ml. High concentrations of the test extracts and control drugs resulted to low infectivity and multiplication of L. major amastigotes. Findings from this study demonstrate that S. nigrum extracts have potential antileishmanial activities however; further investigation needs to be done on pure compound isolation, in vivo assays and clinical trials so as to use the promising compounds as effective antileishmanial agents.",
"keywords": [
"Leishmaniasis",
"Leishmania major",
"Solanum nigrum",
"Plant extracts",
"Toxicity"
],
"content": "Introduction\n\nLeishmaniasis is a widespread disease caused Leishmania parasites which are transmitted by the sandfly. Desjeux (1998) reported that the disease occurred in different clinical forms, which ranged from cutaneous self-healing lesions to a fatal visceralizing form, and also included the metastasizing muco-cutaneous and post-kala-azar dermal leishmaniasis. Leishmaniasis represents an important health and socioeconomic problem in 88 countries around the world, where this disease is endemic according to a study by Dujardin et al. (2008). Despite the death toll and disease burden of leishmaniasis, there is an acute lack of suitable therapies. Treatment of the disease depends on a limited number of drugs with limitations such as high cost, unacceptable host toxicity, poor efficacy, lack of availability, and acquired parasite drug resistance as reported in studies by Barrett & Fairlamb (1999); Fairlamb (2003) and Stuart et al. (2008). Therefore, the development of cheap, available, effective and less toxic drugs is of paramount importance. Medicinal plants are the best alternative since they possess natural active components that can be effective against parasitic infections or can be used in development of commercial drugs.\n\nSeveral studies have reported that some plants are effective against Leishmania parasites both in vitro and in vivo. The findings of studies by Kinuthia et al. (2014); Ndeti et al. (2016) and Njau et al. (2016) revealed that plants such as Allium sativum, Callistemon citrinus, Moringa stenopetala and Aloe secundiflora were effective against Leishmania major parasites. A study by Wabwoba et al. (2010) indicated that a combination of M. stenopetala with Allium sativum induced apoptotic effect in Leishmania major promastigotes. Similar results were reported in studies by McClure et al. (1996) where the growth of Leishmania mexicana and Leishmania chagasii were inhibited by A. sativum, and by Khademvatan et al. (2011) where A. 
sativum extract induced apoptosis in Leishmania major parasites. Schlein et al. (2001) reported that Ricinus communis (Malpighiales: Euphorbiaceae) possessed anti-leishmanial effects both in Phlebotomus duboscqi (Diptera: Psychodidae), the vector for L. major in Israel, and in vivo in BALB/c mice when used alone, as revealed in a study by Oketch et al. (2006), or in combination with Azadirachta indica (Sapindales: Meliaceae) as indicated by Jumba et al. (2015).\n\nThe findings of some studies have shown that Solanaceae family plants have medicinal effects against various parasitic infections. The findings of Laban et al. (2015) showed that Solanum aculeastrum was effective against Leishmania major parasites. Additionally, Mishra et al. (2013) reported that a prenyloxy-naphthoquinone obtained from roots of Plumbago zeylanica (Caryophyllales: Plumbaginaceae) has anti-leishmanial activity against Leishmania donovani. There was a significant difference between the EC50 for the isolated compound and miltefosine, the standard drug (P< 0.001) against L. donovani promastigotes and amastigotes. Studies on L. major in Kenya by Makwali et al. (2015) have shown that Plumbago capensis possesses anti-leishmanial effects. The current study evaluated the in vitro anti-leishmanial activity of Solanum nigrum extracts on L. major parasites.\n\n\nMethods\n\nThe proposal for this research work was submitted to the KEMRI Scientific Steering Committee (SSC) for approval and was given ethical clearance (Number: KEMRI/SSC-2028) on the use of the mice as the animal model by the Ethical Review Committee (ERC). All experimental animals at the end of the experiment were sacrificed by injection of 100 µl sodium pentobarbital and disposed of according to the regulations of the Animal Care and Use Committee (ACUC) through incineration.\n\nThe in vitro studies were carried out using a comparative study design. 
Pentostam (Glaxo Operations (UK) Limited, Barnard Castle, UK) and amphotericin B (AmBisome®; Gilead, Foster City, CA, USA) were used as the standard drugs to compare their efficacy and toxicity with those of the test extracts. RPMI-1640 and Schneider’s Drosophila media (Thermo Fisher Scientific, Waltham, Massachusetts, USA) were used as the control in in vitro experimental chemotherapeutic studies.\n\nFresh leaves of Solanum nigrum were collected from Kisii and Bungoma, Kenya, where the plant is abundant. The plants were transferred to the Center of Traditional Medicine and Drug Research (CTMDR) at KEMRI (Nairobi, Kenya) and dried at 25°C until they became brittle and attained a constant weight. The dried plants were separately ground using an electric mill (Christy & Norris Ltd., Chelmsford, England) into powder followed by extraction using water and analytical grade methanol. The methanolic extracts were prepared as described by Mekonnen et al. (1999) and Cock (2012). Briefly, 100 g of ground plant material was soaked in 500 ml of analytical grade methanol for 72 h at room temperature with gentle shaking. The mixture was filtered using Whatman No.1 filter papers and concentrated using a rotary evaporator to obtain dry methanolic extracts. The extracts were coded as A and B for methanolic extracts of S. nigrum (Bungoma) and S. nigrum (Kisii), respectively. The aqueous extracts were prepared as described by Delahaye et al. (2009). Briefly, 100 g of the dried ground plant material in 600 ml of distilled water was placed in a water bath at 70°C for 1.5 h. The mixture was filtered using Whatman No.1 filter papers and then the filtrate frozen, dried and weighed. The extracts were coded as C and D for S. nigrum (Kisii) and S. nigrum (Bungoma), respectively. The extracts were then stored at 4°C until required for bioassays.\n\nA total of four 8-week-old male inbred BALB/c mice with weights that ranged between 25 and 29 g were obtained from KEMRI. 
The BALB/c mice were kept eight per cage in the animal house at 23–25°C under a 12/12 h light/dark cycle, fed a standard diet in the form of mouse pellets and given tap water ad libitum. The mice were handled in accordance with the regulations set by the Animal Care and Use Committee at KEMRI. The mice were used for extraction of the peritoneal macrophages that were used for the anti-amastigote assay.\n\nThe Leishmania major strain (IDUB/KE/94=NLB-144), which was originally isolated in 1983 from a female Phlebotomus duboscqi collected from Marigat, Baringo County in Kenya, was used. The parasites were grown to stationary phase at 25°C in Schneider’s Drosophila medium supplemented with 20% heat-inactivated fetal bovine serum (FBS) (Hyclone® USA), 100 U/ml penicillin and 500 µg/ml streptomycin (Hendricks & Wright, 1979), and 250 µg/ml 5-fluorocytosine arabinoside (Kimber et al., 1981). The stationary-phase metacyclic stage promastigotes were then harvested by centrifugation at 1500g for 15 min at 4°C. The metacyclic promastigotes were then used for the in vitro assays.\n\nStock solutions of the crude plant extracts were made in Schneider’s Drosophila culture medium for anti-leishmanial assays and filtered through 0.22-µm filter flasks in a laminar flow hood (Biological Safety Cabinet). The stock solutions were then stored at 4°C and retrieved later for both in vitro bioassays.\n\nThis assay was used to test the cytotoxicity of individual extracts against Vero cells (Thermo Fisher Scientific, Waltham, Massachusetts, USA) and the results were presented as IC50 values. The assay was carried out as described by Wabwoba et al. (2010). Vero cells were grown in minimum essential medium (MEM) (ATCC® 30-2003™) supplemented with 10% FBS, penicillin (100 IU/ml) and streptomycin (100 µg/ml) in 25 ml cell culture flasks incubated at 37°C in a humidified 5% CO2 atmosphere for 24 h. 
The Vero cells were harvested by trypsinization and pooled in 50-ml centrifuge tubes, from which 100 µl of cell suspension was moved into two wells of rows A-H in a 96-well flat-bottomed microtiter plate at a concentration of 1×10⁶ cells per ml of the culture medium per well and incubated at 37°C in 5% CO2. The MEM was gently aspirated off and 150 µl of the test extracts (A, B, C and D) were added at concentrations of 1000 µg/ml, 500 µg/ml, 250 µg/ml, 125 µg/ml, 62.5 µg/ml and 31.25 µg/ml in the microtiter plates. The plates containing the Vero cells and test extracts were further incubated at 37°C for 48 h in a humidified 5% CO2 atmosphere. The control wells comprised Vero cells and medium while the blank wells had medium alone. A total of 10 µl of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) reagent was added into each well and incubated further for 2–4 h until a purple precipitate (formazan) was visible under the microscope. The medium together with the MTT reagent was gently aspirated off, after which 100 µl of dimethyl sulfoxide (DMSO) was added, and the plates were vigorously shaken for 5 minutes in order to dissolve the formazan. The absorbance (optical density) was measured for each well plate using a microtiter plate reader at a wavelength of 570 nm. The IC50 values of the extracts were determined automatically using the Chemosen program v2.\n\nThe MICs were determined as described by Wabwoba et al. (2010). Briefly, the L. major metacyclic promastigotes at a concentration of 1×10⁶ promastigotes per ml of the culture medium were treated with individual methanolic test extracts A and B whose concentrations were 2000 µg/ml, 1000 µg/ml, 500 µg/ml and 250 µg/ml. These test procedures were repeated for aqueous extracts C and D. 
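As a hypothetical illustration (not part of the authors' protocol), the two-fold serial dilution series used for the MIC assay, from the 2000 µg/ml stock down to 250 µg/ml, can be sketched as:

```python
# Illustrative sketch only: generate a two-fold serial dilution series,
# as used for the MIC assay (2000, 1000, 500 and 250 ug/ml).
def twofold_series(start_ug_ml, steps):
    """Return `steps` concentrations, halving at each step."""
    return [start_ug_ml / 2 ** i for i in range(steps)]

print(twofold_series(2000.0, 4))  # [2000.0, 1000.0, 500.0, 250.0]
```

Extending the same series two more halvings gives the 62.5 µg/ml lower bound used elsewhere in the dose-response work.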
The lowest concentration of the individual test extracts in which no live promastigotes were observed was the MIC.\n\nMetacyclic promastigotes at a concentration of 1×10⁶ promastigotes per ml of the culture medium were grown for 48 h in 24-well microtiter plates at 25°C. Aliquots of the promastigotes were transferred into 96-well microtiter plates and incubated further at 27°C for 24 h, after which 200 µl of the highest concentration (2 mg/ml) of the individual test extracts was added before serial dilutions to 2.0×10³, 1.0×10³, 5.0×10², 2.5×10², 1.25×10², and 6.25×10¹ µg/ml were carried out. The control wells contained L. major promastigotes in culture medium alone whereas the blank wells had the culture medium alone. The plates were incubated further at 27°C for 48 h and 10 µl of MTT reagent was added into each well and incubated further for 4 h. The medium and MTT reagent were aspirated off the wells. Next, in each well, 100 µl of DMSO was added and the plates shaken for 5 minutes. Absorbances were read at 562 nm using a microtiter reader. The absorbance readings were used to generate IC50 values for the different plant extracts using the Chemosen program v2.\n\nSurvival of L. major promastigotes was stratified as follows: ++++, 75–100% survival compared with control; +++, 50–<75% survival compared with control; ++, 25–<50% survival compared with control; +, <25% survival compared with control; -, absence of live promastigotes.\n\nThe anti-amastigote assay was carried out as described by Delorenzi et al. (2001). The peritoneal macrophages were obtained from four clean BALB/c mice. The mice were anaesthetized using 100 µl pentobarbital sodium (Sagatal®). The body surface of the mouse was disinfected with 70% ethanol after which it was opened dorso-ventrally to expose the peritoneum. A total of 10 µl sterile cold PBS was injected into the peritoneum. After injection, the peritoneum was gently massaged for 2 min to dislodge and release macrophages into the PBS. 
The peritoneal macrophages were then harvested by withdrawing the PBS. The PBS containing the macrophages was washed through centrifugation at 2,000g for 10 min and the pellet obtained was re-suspended in RPMI-1640 culture medium. The macrophages were adsorbed in 24-well plates for 4 h at 37°C in 5% CO2. Non-adherent cells were washed away with cold sterile PBS and the adherent macrophages were incubated overnight in RPMI-1640 culture medium. Adherent macrophages were then infected with L. major promastigotes and were further incubated at 37°C in 5% CO2 for 4 h, after which they were washed with sterile PBS to remove the free promastigotes which were not engulfed by the macrophages. This was followed by incubation of the preparation for 24 h in RPMI-1640 culture medium.\n\nPentostam and liposomal amphotericin B at concentrations of 125 µg/ml, 250 µg/ml and 500 µg/ml were used as positive control drugs to compare the parasite inhibition with that by plant extracts. The medium and test extracts or drug were replenished daily for 3 days. After 5 days, the macrophages were washed with sterile PBS at 37°C, fixed in methanol and stained with 10% Giemsa. The number of amastigotes was determined by counting at least 100 macrophages in duplicate cultures, and the count was expressed as infection rate (IR) and multiplication index (MI) as described by Berman & Lee (1984) in the calculations below:\n\nIR (%) = Number of infected macrophages per 100 macrophages\n\nMI (%) = (Number of amastigotes in experimental culture per 100 macrophages ÷ Number of amastigotes in control culture per 100 macrophages) × 100\n\nThe IC50 values were determined using the Chemosen program v2. Data for infection rates and multiplication indices were saved as percentages and then analyzed using the SPSS 13.0 programme. The results were expressed as mean values ± standard deviation (SD). 
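The IR and MI calculations above, together with a simple IC50 estimate by linear interpolation of a dose-response curve, can be sketched as follows. This is a hypothetical illustration only; the study used the Chemosen program and SPSS, and all numbers below are made up:

```python
# Hypothetical sketch (not the authors' Chemosen/SPSS workflow).
def infection_rate(n_infected, n_counted=100):
    # IR (%) = infected macrophages per 100 macrophages counted
    return 100.0 * n_infected / n_counted

def multiplication_index(amastigotes_treated, amastigotes_control):
    # MI (%) = amastigotes in treated culture / amastigotes in control x 100
    return 100.0 * amastigotes_treated / amastigotes_control

def ic50(concs, responses):
    """Estimate the concentration giving 50% response by linear
    interpolation; `responses` are % survival, decreasing as `concs` rise."""
    points = list(zip(concs, responses))
    for (c1, r1), (c2, r2) in zip(points, points[1:]):
        if r1 >= 50.0 >= r2:
            return c1 + (r1 - 50.0) * (c2 - c1) / (r1 - r2)
    raise ValueError("50% response not bracketed by the data")

print(infection_rate(68))            # 68.0 (%), cf. extract B at 125 ug/ml
print(multiplication_index(45, 90))  # 50.0 (%)
print(ic50([250, 500, 1000, 2000], [80, 60, 40, 20]))  # 750.0 (ug/ml)
```

A dedicated dose-response fit (e.g. a four-parameter logistic) would be more robust than straight-line interpolation, but the interpolation conveys how an IC50 is read off the survival curve.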
Statistical analyses were done using one-way ANOVA and Tukey’s post hoc test, and p values < 0.05 were considered significant.\n\n\nResults\n\nGenerally, all the test extracts studied were less toxic (i.e. higher IC50 values) to Vero cells when compared to the control drugs, pentostam (0.03 mg/ml) and amphotericin B (0.01 mg/ml) (Table 1).\n\nThe initial concentration of the test extracts was 1000 µg/ml while that of the control drugs was 500 µg/ml before serial dilution.\n\nThe methanolic and aqueous extracts tended to show a dose-dependent cytotoxic effect which corresponded to low IC50 values as their concentration increased. Methanolic extracts of S. nigrum from Bungoma and Kisii had IC50 values of 0.57 mg/ml and 0.50 mg/ml while those of their aqueous counterparts were 0.76 mg/ml and 0.64 mg/ml (Table 1).\n\nPreliminary studies involved exposure of the L. major promastigotes to extracts and control drugs at varying concentrations in vitro. The L. major parasites cultured in Schneider’s Drosophila medium were taken as the negative control because the parasites continued to multiply. Both pentostam and amphotericin B inhibited Leishmania major promastigote growth at an MIC of 31.25 µg/ml. The Schneider’s Drosophila medium, on the other hand, led to maximum survival of L. major promastigotes (++++) (Table 2).\n\n++++, 75-100% survival compared with control; +++, 50-<75% survival compared with control; ++, 25-<50% survival compared with control; +, <25% survival compared with control; -, absence of live promastigotes.\n\nBoth aqueous extracts of S. nigrum from Bungoma and Kisii (C and D) inhibited the survival of Leishmania major promastigotes at an MIC of 2000 µg/ml. The S. nigrum methanolic extract from Bungoma (A) lowered L. major parasites at a concentration of 1000 µg/ml, while that from Kisii (B) inhibited L. major multiplication at an MIC of 500 µg/ml.\n\nThe concentrations of the extracts that were effective against L. 
major promastigotes in vitro were high (>0.5 mg/ml) compared to those of the standard drugs (0.03125 mg/ml). The efficacy of methanolic extracts was better than that of their respective aqueous counterparts (Table 2).\n\nThe MICs of S. nigrum from Bungoma (A) and S. nigrum from Kisii (B) methanolic extracts were 1000 µg/ml and 500 µg/ml, respectively. Both aqueous extracts of S. nigrum from Kisii (C) and S. nigrum from Bungoma (D) had MIC values of 2000 µg/ml. The standard drugs had lower MICs (31.25 µg/ml) against L. major promastigotes compared to the test extracts. Schneider’s Drosophila medium, which was the negative control, supported maximum survival of Leishmania major promastigotes (Table 3).\n\nThe extract concentrations ranged from 2000 µg/ml to 250 µg/ml for MIC determination and from 2000 µg/ml to 62.5 µg/ml for IC50 determination, while the concentrations of the positive controls ranged from 125 µg/ml to 15.625 µg/ml for MIC determination and from 500 µg/ml to 15.625 µg/ml for IC50 determination.\n\nThe IC50 values determined indicated the effectiveness of the test extracts or the controls in inhibiting the promastigotes by 50%. Pentostam and amphotericin B had IC50 values of 0.08 mg/ml and 0.04 mg/ml, respectively, while S. nigrum from Bungoma (A) and S. nigrum from Kisii (B) methanolic extracts had IC50 values of 1.81 mg/ml and 1.28 mg/ml, respectively. Similarly, aqueous extracts of S. nigrum from Kisii (C) and S. nigrum from Bungoma (D) had IC50 values of 1.18 mg/ml and 1.84 mg/ml, respectively. Based on the IC50 values, the aqueous extract of S. nigrum from Kisii (C) was the most effective of the extracts (Table 3).\n\nFor extracts that shared the same MICs, higher IC50 values tended to correspond to increased in vitro survival of promastigotes of Leishmania. The methanolic extract of S. 
nigrum from Kisii seemed to be more efficacious in vitro since it knocked out the promastigotes at a lower MIC level (0.5 mg/ml) when compared to all other extracts whose effective MIC level was ≥1 mg/ml (Table 3).\n\nRPMI-1640 medium with no drug had an infection rate of 96.7% (Table 4), which implied that it supported maximum growth of Leishmania major amastigotes in peritoneal macrophages (Figure 1). The leishmaniasis drugs pentostam and liposomal amphotericin B inhibited the in vitro survival of L. major amastigotes, corresponding to low infection rates of 26.3% and 21.0%, respectively, at a concentration of 50 µg/ml (Table 4).\n\nAm = amastigote, Nuc = nucleus, and Cyt = cytoplasm.\n\nAt a concentration of 125 µg/ml, the methanolic extracts of S. nigrum from Bungoma (A) and S. nigrum from Kisii (B) had infection rates of 71.0±2.3% and 68.0±2.7%, respectively. Similarly, the infection rates of the aqueous extracts, S. nigrum from Kisii (C) and S. nigrum from Bungoma (D), were 78.0±2.5% and 85.3±1.2% (Table 4).\n\nThe methanolic extract of S. nigrum from Kisii (B) inhibited the survival of L. major amastigotes better than the other extracts at all the concentrations studied (Figure 2). High concentrations of the test extracts and control drugs resulted in low IRs and MIs of L. major amastigotes. The efficacies were dose-dependent. The difference between the IRs of test extracts and the control drugs was statistically significant (P< 0.05).\n\nWhen the MIs of amastigotes in peritoneal macrophages treated with 125 µg/ml of methanolic test extracts (A and B) were compared with those treated with 50 µg/ml of amphotericin or pentostam, using one-way ANOVA, there was a statistically significant difference (P<0.001). A Tukey post hoc test revealed that the MIs of the methanolic extracts of S. 
nigrum from Kisii (A and B) at 500 µg/ml were statistically significantly different from those of pentostam and amphotericin B (P= 0.001).\n\nWhen the infection rates of methanolic extracts A and B at 500 µg/ml were compared with those of amphotericin B using Tukey’s post hoc test, the difference in each case was statistically significant (P<0.001 and P=0.001, respectively). Comparisons of the IRs for extracts C and D with those of amphotericin B or pentostam followed a similar trend, where Tukey’s post hoc test indicated a significant difference (P<0.05) for each comparison. The MIs of pentostam and amphotericin B were not statistically different (P≥ 0.05) at a concentration of 25 µg/ml.\n\n\nDiscussion\n\nThis study has shown that S. nigrum has anti-leishmanial activity against Leishmania parasites. The results indicated that the plant extracts of S. nigrum obtained from Kisii and Bungoma have the potential to inhibit L. major promastigotes in vitro. The current study further established that the concentrations of the extracts that were effective (MIC) against L. major promastigotes in vitro were relatively high (>0.5 mg/ml) compared to those of pentostam and amphotericin B, which both inhibited the promastigotes at 0.03125 mg/ml. The efficacy of methanolic extracts was better than that of their respective aqueous counterparts. Schneider’s Drosophila medium was used as a negative control and supported maximum survival of the L. major promastigotes in vitro. This was expected because this medium supports the growth of Leishmania promastigotes and amastigotes, as described by Hendricks & Wright (1979). The efficacy of test extracts was higher than that of Schneider’s Drosophila medium. The slight differences that have been noted between the two allopatric plants could be due to factors such as differences in the presence and composition of the phytochemicals; a study by Aritho et al. (2017) on T. vogelii also revealed such differences.\n\nA study by Son et al. 
(2003) showed that extracts from S. nigrum leaves had the potential to be used in the treatment of tumors, especially liver cancer, and also for the treatment of lung cancer, bladder and gastric carcinoma, as indicated by studies done by Mueller et al. (2005) and Ashwani et al. (2012). Additionally, studies by Jain et al. (2011) and Ashwani et al. (2012) revealed that methanol crude extracts obtained from Solanum nigrum possessed antioxidant activity due to their DPPH radical scavenging activity.\n\nStudies by Estevez et al. (2007); Filho et al. (2013); Hubert et al. (2014); and Shen et al. (2012) revealed that some species of Solanum had antileishmanial activity. Findings from the study by Estevez et al. (2007) showed that the extracts of S. stramonifolium had activity against L. amazonensis amastigotes. This activity was attributed to steroid derivatives which include cilistol A and steroidal alkaloids, which form the main components in Solanum species (Abreu Miranda et al., 2013; Filho et al., 2013).\n\nCytotoxic assays using Vero cells showed that the test extracts were less toxic compared to the standard antileishmanial drugs. Generally, an increase in the dose of the extracts led to a higher cytotoxic effect on L. major promastigotes, resulting in inhibition of the growth of the parasites. Many drugs used for the treatment of leishmaniasis are highly toxic (Santos et al., 2008) and this study confirmed that pentostam and amphotericin B are more toxic than the extracts tested. The continued use of the contemporary leishmaniasis drugs despite their toxicity is mainly due to the lack of an alternative. The use of herbal medicine can be a cheaper and available alternative. The aqueous extracts of both S. 
nigrum from Bungoma and Kisii (IC50, 0.76 mg/ml and 0.64 mg/ml, respectively) were less toxic than the methanolic extracts (IC50 of 0.57 mg/ml and 0.50 mg/ml, respectively).\n\nThe lower the toxicity of the test extracts, the higher the viability of Vero cells after exposure to the extracts, and vice versa. According to Das et al. (2007), plants of the Solanaceae family have been reported to be poisonous both to humans and livestock. Their toxicity has been attributed to the presence of tropane alkaloids, which, when ingested in large quantities, cause anticholinergic effects. Another study by Glossman-Mitnik (2007) reported that the toxicity of S. nigrum, which is edible, is due to solanine, a glycoalkaloid which causes toxicity as its concentration increases.\n\nThe S. nigrum from Bungoma (A) and S. nigrum from Kisii (B) methanolic crude extracts had infection rates of 71.0±2.3% and 68.0±2.6%, respectively, at a concentration of 125 µg/ml. Similarly, the infection rates of the aqueous extracts, S. nigrum from Kisii (C) and S. nigrum from Bungoma (D), were 78.0±2.5% and 85.3±1.2%, respectively. In comparison, the leishmaniasis drugs pentostam and liposomal amphotericin B inhibited the in vitro survival of L. major amastigotes more effectively, corresponding to low infection rates of 26.3% and 21%, respectively, at a concentration of 50 µg/ml. There was a significant difference between the efficacy of the test extracts and that of the Leishmania drugs (P<0.05). This observation is consistent with the known antimicrobial and antifungal potential of S. nigrum extracts (Abbas et al., 2014; Musto, 2014). When the test extracts were compared with the controls, the IR of macrophages by L. major amastigotes in plain RPMI-1640 medium (negative control) was 96.7±0.9%. 
This agrees with Berman & Wyler (1980), who observed that amastigotes of Leishmania tropica and Leishmania donovani in peritoneal macrophages multiplied about threefold in six days when grown in RPMI-1640 medium in the absence of antileishmanial agents. The trend was similar for MIs. This observation was similar to that of Wabwoba et al. (2010), who observed that the IRs for amphotericin B and pentostam at 100 µg/ml were 9.0% and 11%, respectively. In this study, however, although the difference between the MIs for amphotericin B and pentostam at 50 µg/ml was not statistically significant, the in vitro efficacy of amphotericin B in suppressing amastigote multiplication was higher than that of pentostam.\n\n\nConclusion\n\nThe findings of this study have justified the claimed medicinal importance of Solanum nigrum as a remedy for various infections. It can be concluded that the crude extracts of S. nigrum possess considerable anti-leishmanial activity against Leishmania major, the species used in this study. The plant may contain potent anti-parasitic compounds effective in the treatment of Leishmania infections. However, further investigation needs to be conducted on pure compound isolation, toxicological studies and clinical trials so that the promising compounds can be used as effective antileishmanial agents.\n\n\nData availability\n\nDataset 1. Raw data for absorbance values from MTT assay and subsequent calculation of IC50 values on Vero cells for extracts of Solanum nigrum from Kisii and controls. For sorted raw absorbance data, columns 3, 6, 9 and 12 contain untreated cells; wells A1, A2, A4, A5, A7, A8, A10, A11, B1, B2, B4, B5, B7, B8, B10 and B11 contain medium only. Rows C-H contain the indicated test samples, with extract concentrations of 31.25, 62.5, 125, 250, 500 and 1000 µg/ml, respectively, and control drug concentrations of 16.125, 31.25, 62.5, 125, 250 and 500 µg/ml, respectively. 
DOI: https://doi.org/10.5256/f1000research.15826.d214921 (Mutoro et al., 2018).\n\nDataset 2. Raw data for absorbance values from MTT assay and subsequent calculation of IC50 values (on Vero cells) for extracts of Solanum nigrum from Bungoma. For sorted raw absorbance data, columns 3, 6, 9 and 12 contain untreated cells; wells A1, A2, A4, A5, A7, A8, A10, A11, B1, B2, B4, B5, B7, B8, B10 and B11 contain medium only. Rows C-H contain test samples, with extract concentrations of 31.25, 62.5, 125, 250, 500 and 1000 µg/ml, respectively. DOI: https://doi.org/10.5256/f1000research.15826.d214922 (Mutoro et al., 2018).\n\nDataset 3. Raw data for absorbance values from MTT assay and subsequent calculation of IC50 values (on promastigotes) for extracts of Solanum nigrum from Kisii and controls. For sorted raw absorbance data, columns 3, 6, 9 and 12 contain untreated cells; wells A1, A2, A4, A5, A7, A8, A10, A11, B1, B2, B4, B5, B7, B8, B10 and B11 contain medium only. Rows C-H contain test samples, with extract concentrations of 62.5, 125, 250, 500, 1000 and 2000 µg/ml, respectively, for the extract samples, and 16.125, 31.25, 62.5, 125, 250 and 500 µg/ml, respectively, for standard drugs. DOI: https://doi.org/10.5256/f1000research.15826.d214923 (Mutoro et al., 2018).\n\nDataset 4. Raw data for absorbance values from MTT assay and subsequent calculation of IC50 values (on promastigotes) for extracts of Solanum nigrum from Bungoma. For sorted raw absorbance data, columns 3, 6, 9 and 12 contain untreated cells; wells A1, A2, A4, A5, A7, A8, A10, A11, B1, B2, B4, B5, B7, B8, B10 and B11 contain medium only. Rows C-H contain test samples, with extract concentrations of 62.5, 125, 250, 500, 1000 and 2000 µg/ml, respectively. DOI: https://doi.org/10.5256/f1000research.15826.d214924 (Mutoro et al., 2018).\n\nDataset 5. Anti-amastigote (macrophage) assays. DOI: https://doi.org/10.5256/f1000research.15826.d214929 (Mutoro et al., 2018).",
"appendix": "Grant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAbbas K, Niaz U, Hussain T, et al.: Antimicrobial activity of fruits of Solanum nigrum and Solanum xanthocarpum. Acta Pol Pharm. 2014; 71(3): 415–421. PubMed Abstract\n\nAbreu Miranda M, Tiossi RF, da Silva MR, et al.: In vitro leishmanicidal and cytotoxic activities of the glycoalkaloids from Solanum lycocarpum (Solanaceae) fruits. Chem Biodivers. 2013; 10(4): 642–648. PubMed Abstract | Publisher Full Text\n\nAshwani K, Sagwal S, Rani S: An updated review of molecular genetics, phytochemistry and pharmacology of black nightshades (Solanum nigrum). Int J Pharm Res Sci. 2012; 3(9): 2956–2977. Publisher Full Text\n\nBarrett MP, Fairlamb AH: The biochemical basis of arsenical-diamidine cross-resistance in African trypanosomes. Parasitol Today. 1999; 15(4): 136–140. PubMed Abstract | Publisher Full Text\n\nBerman JD, Lee LS: Activity of antileishmanial agents against amastigotes in human monocyte-derived macrophages and in mouse peritoneal macrophages. J Parasitol. 1984; 70(2): 220–225. PubMed Abstract | Publisher Full Text\n\nBerman JD, Wyler DJ: An in vitro model for investigation of chemotherapeutic agents in leishmaniasis. J Infect Dis. 1980; 142(1): 83–86. PubMed Abstract | Publisher Full Text\n\nCock IE: Antimicrobial activity of Callistemon citrinus and Callistemon salignus methanolic extracts. Pharmacognosy Communications. 2012; 2(3): 50–57. Publisher Full Text\n\nDas JKL, Prasad SR, Mitra SK: Evaluation of Liv.52 DS tablet as hepatoprotective agent in prophy with statin therapy. Medical Update. 2007; 15: 31–36. Reference Source\n\nDelahaye C, Rainford L, Nicholson A, et al.: Antibacterial and antifungal analysis of crude extracts from leaves of Callistemon viminalis. Journal of Medical and Biological Sciences. 2009; 3(1): ISSN 1934-7189. 
Reference Source\n\nDelorenzi JC, Attias M, Gattas CR, et al.: Antileishmanial activity of an indole alkaloid from Peschiera australis. Antimicrob Agents Chemother. 2001; 45(5): 1349–1354. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDesjeux P: Leishmania and HIV in Gridlock. Document WHO/CTD/LEISH/98.23. Geneva: World Health Organization. 1998. Reference Source\n\nDujardin JC, Campino L, Cañavate C, et al.: Spread of vector-borne diseases and neglect of Leishmaniasis, Europe. Emerg Infect Dis. 2008; 14(7): 1013–1018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEstevez Y, Castillo D, Pisango MT, et al.: Evaluation of the leishmanicidal activity of plants used by Peruvian Chayahuita ethnic group. J Ethnopharmacol. 2007; 114(2): 254–259. PubMed Abstract | Publisher Full Text\n\nFairlamb AH: Chemotherapy of human African trypanosomiasis: current and future prospects. Trends Parasitol. 2003; 19(11): 488–494. PubMed Abstract | Publisher Full Text\n\nFilho VC, Meyre-Silva C, Niero R, et al.: Evaluation of antileishmanial activity of selected Brazilian plants and identification of the active principles. Evid Based Complement Alternat Med. 2013; 2013: 265025. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGlossman-Mitnik D: CHIH-DFT Determination of the molecular structure and infrared and ultraviolet spectra of gamma-solanine. Spectrochim Acta A Mol Biomol Spectrosc. 2007; 66(1): 208–211. PubMed Abstract | Publisher Full Text\n\nHendricks L, Wright N: Diagnosis of cutaneous leishmaniasis by in vitro cultivation of saline aspirates in Schneider's Drosophila Medium. Am J Trop Med Hyg. 1979; 28(6): 962–964. PubMed Abstract | Publisher Full Text\n\nHubert DJ, Céline N, Michel N, et al.: In vitro leishmanicidal activity of some Cameroonian medicinal plants. Exp Parasitol. 2013; 134(3): 304–308. PubMed Abstract | Publisher Full Text\n\nJain R, Sharma A, Gupta S, et al.: Solanum nigrum: current perspectives on therapeutic properties. 
Altern Med Rev. 2011; 16(1): 78–85. PubMed Abstract\n\nJumba BN, Anjili CO, Makwali J, et al.: Evaluation of leishmanicidal activity and cytotoxicity of Ricinus communis and Azadirachta indica extracts from western Kenya: in vitro and in vivo assays. BMC Res Notes. 2015; 8: 650. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhademvatan S, Gharavi MJ, Rahim F, et al.: Miltefosine-induced apoptotic cell death on Leishmania major and L. tropica strains. Korean J Parasitol. 2011; 49(1): 17–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKimber C, Evans D, Robinson B, et al.: Control of yeast contamination with 5-fluorocytosine in the in vitro cultivation of Leishmania spp. Ann Trop Med Parasitol. 1981; 75(4): 453–454. PubMed Abstract | Publisher Full Text\n\nKinuthia GK, Kabiru EW, Gikonyo NK, et al.: In vitro activity of aqueous and methanol extracts of Callistemon citrinus (Family Myrtaceae) against Leishmania major. Afr J Health Sci. 2014; 27(2): 118–133. Reference Source\n\nLaban LT, Anjili CO, Mutiso JM, et al.: Experimental therapeutic studies of Solanum aculeastrum Dunal. on Leishmania major infection in BALB/c mice. Iran J Basic Med Sci. 2015; 18(1): 64–71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMakwali JA, Wanjala FME, Ingonga J, et al.: In vitro studies on the antileishmanial activity of herbicides and plant extracts against Leishmania major parasites. Res J Med Plants. 2015; 9(3): 90–104. Publisher Full Text\n\nMcClure CD, Noland LL, Zatyrka SA: Antileishmanial properties of Allium sativum extracts and derivatives. Acta Horticulturae. 1996; 426: 183–191. Publisher Full Text\n\nMekonnen Y, Yardley V, Rock P, et al.: In vitro antitrypanosomal activity of Moringa stenopetala leaves and roots. Phytother Res. 1999; 13(6): 538–539. PubMed Abstract | Publisher Full Text\n\nMishra BB, Gour JK, Kishore N, et al.: An antileishmanial prenyloxy-naphthoquinone from roots of Plumbago zeylanica. Nat Prod Res. 
2013; 27(4–5): 480–485. PubMed Abstract | Publisher Full Text\n\nMueller LA, Solow TH, Taylor N, et al.: The SOL Genomics Network: a comparative resource for Solanaceae biology and beyond. Plant Physiol. 2005; 138(3): 1310–1317. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMusto M, Potenza G, Cellini F: Inhibition of Penicillium digitatum by a crude extract from Solanum nigrum leaves. Biotechnol Agron Soc Environ. 2014; 18(2): 174–180. Reference Source\n\nMutoro CN, Kinyua JK, Ng'ang'a JK, et al.: Dataset 1 in: In vitro study of the efficacy of Solanum nigrum against Leishmania major. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15826.d214921\n\nMutoro CN, Kinyua JK, Ng'ang'a JK, et al.: Dataset 2 in: In vitro study of the efficacy of Solanum nigrum against Leishmania major. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15826.d214922\n\nMutoro CN, Kinyua JK, Ng'ang'a JK, et al.: Dataset 3 in: In vitro study of the efficacy of Solanum nigrum against Leishmania major. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15826.d214923\n\nMutoro CN, Kinyua JK, Ng'ang'a JK, et al.: Dataset 4 in: In vitro study of the efficacy of Solanum nigrum against Leishmania major. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15826.d214924\n\nMutoro CN, Kinyua JK, Ng'ang'a JK, et al.: Dataset 5 in: In vitro study of the efficacy of Solanum nigrum against Leishmania major. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15826.d214929\n\nNdeti CM, Kituyi C, Ndirangu M, et al.: Efficacy of combination therapy using extracts of Aloe secundiflora Engl. and Callistemon citrinus William C. in Leishmania major infected BALB/c mice. East Afr Med J. 2016; 93(3): 127–134. Reference Source\n\nNjau VN, Maina ENM, Anjili CO, et al.: Phytochemical analysis of Carissa edulis against Leishmania major. African Journal of Pharmacology and Therapeutics. 2016; 5(4): 253–262. 
Reference Source\n\nOketch GBA, Irungu LW, Anjili CO, et al.: In vitro activity of the total aqueous ethanol leaf extracts of Ricinus communis on Leishmania major promastigotes. Kenya Journal of Sciences, Series A & B. Special Edition. 2006; 1(1): 1–4. Reference Source\n\nSantos DO, Coutinho CE, Madeira MF, et al.: Leishmaniasis treatment – a challenge that remains: a review. Parasitol Res. 2008; 103(1): 1–10. PubMed Abstract | Publisher Full Text\n\nSchlein Y, Jacobson RL, Müller GC: Sand fly feeding on noxious plants: a potential method for the control of leishmaniasis. Am J Trop Med Hyg. 2001; 65(4): 300–303. PubMed Abstract | Publisher Full Text\n\nShen T, Li GH, Wang XN, et al.: The genus Commiphora: a review of its traditional uses, phytochemistry and pharmacology. J Ethnopharmacol. 2012; 142(2): 319–330. PubMed Abstract | Publisher Full Text\n\nSon YO, Kim J, Lim JC, et al.: Ripe fruit of Solanum nigrum L. inhibits cell growth and induces apoptosis in MCF-7 cells. Food Chem Toxicol. 2003; 41(10): 1421–1428. PubMed Abstract | Publisher Full Text\n\nStuart K, Brun R, Croft S, et al.: Kinetoplastids: related protozoan pathogens, different diseases. J Clin Invest. 2008; 118(4): 1301–1310. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWabwoba BW, Anjili CO, Ngeiywa MM, et al.: Experimental chemotherapy with Allium sativum (Liliaceae) methanolic extract in rodents infected with Leishmania major and Leishmania donovani. J Vector Borne Dis. 2010; 47(3): 160–167. PubMed Abstract"
}
|
[
{
"id": "37485",
"date": "31 Aug 2018",
"name": "Sichangi Kasili",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAbstract\nGrammar: “Parasites cause” not “parasites causes”. There are currently inadequate therapeutic interventions to manage this endemic tropical disease, transmitted mainly by phlebotomine sandflies hence there is need to develop affordable and effective therapeutic measures. The current study sought to determine the in vitro efficacy of Solanum nigrum methanolic and aqueous plant extracts on Leishmania major parasites. When making reference to numbers use digit if more than ten i.e. “three” and not “3”. There should be an indication of the country where the study was conducted, how the plants for plant extracts were collected and the animal models used. There is no indication of how the results of the study were analyzed statistically. Methanolic extract of S. nigrum from Kisii seemed to be more efficacious in vitro since it knocked out the promastigotes at a lower MIC level (0.5 mg/ml) when compared to all other extracts whose effective MIC level was ≥ 1 mg/ml. Last sentence needs rephrasing.\n\nIntroduction\nGrammar, especially some words lacking in connectives i.e. in first line the word “by” is missing between “caused” and “Leishmania”. Leishmaniasis represents an important health and socioeconomic problem in 88 countries around the world, where this disease is endemic according to a study by Dujardin et al. (2008). There are long sentences which need to be broken to make sense. 
There is need for the authors to demonstrate the need for investigating the plants they used given that a number of studies show promising results. What was different with the plant?\n\nMethods\nThe mice were handled in accordance with the regulations set by the Animal Care and Use Committee at KEMRI. Statistical analyses should be specifically in respect of the assays conducted rather than general presentation. How were survival rates of L. major promastigotes analyzed? Why did the authors not use the same concentrations of the test extract and control drugs, then increase concentrations of test extracts if required? How was Tukey's test used? Scientific names should be italicized in this section and the rest of the manuscript. Remove repetitions of the methods in this section. Titles of tables should not be combined with what should be footnotes i.e. table 1 and 3. Rework table 2 for better presentation. Avoid use of abbreviation IRs, not standard. Figure 1 not necessary.\n\nDiscussion\nAvoid direct repetition of results in discussion i.e. most of paragraph 1. Authors should insert the explanation for why there are differences in extracts after the relevant sentence. What is the relevance of the second paragraph to the current study? Discuss each section of the results separately and thoroughly by comparing with other studies, including own opinions and explanations.\n\nReferences\nAre the phrases in red font after each reference a journal requirement? If not delete. Datasets sites should be removed from reference section.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "39674",
"date": "05 Nov 2018",
"name": "Sarman Singh",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nComments:\n\nAbstract: The abstract should be rewritten; even the IC50 values of the standard drugs do not match the result section of the manuscript.\nKeywords: Leishmania major and Solanum nigrum should be in italics throughout the manuscript; e.g.: Leishmania major and Solanum nigrum\nMethods: No subheading or paragraph describes the maintenance of the cell lines being used for cytotoxicity. This is important information which is missing. Please add these details.\nMice and Leishmania parasites: We wonder why the parasites are being cultured in Drosophila medium instead of M199 or RPMI. Drosophila medium is often used to achieve fast primary isolation rather than maintenance. The M199 and RPMI media are far better for the maintenance of the parasites.\nPreparation of the stock solution of test extracts: The stock solution of the extracts in the culture medium should be prepared freshly, immediately before use. It should not be stored for later use, as it may change the pH value of the medium, which can hamper the activity of the extracts. It can also lead to a high rate of contamination. Furthermore, if at all, it should be stored only in the solvent, not in the medium.\nCytotoxicity assay using Vero cells to determine IC50: This paragraph needs to be rewritten. 
There are no details of counting of the cells.\nIn the beginning of the manuscript, the authors mention the volume of the suspension of cells from which 100 µl of cells were directly used and finally declared it as 1×10⁶ cells per ml. To our minds, the centrifugation and counting of the cells, with its procedure, is extremely important to claim any activity of antileishmanial compounds. Otherwise the results are difficult to reproduce.\nIt is also not clear how the authors could test 6 dilutions of 4 extracts. As per the description of plate designing, the cells were plated only in two wells of the A-H rows (as mentioned in line no 11 in the above subheading). This totals to only 16. It is a serious concern for this reviewer.\nAgain, the authors have not mentioned the time duration of incubation before aspirating out the MEM medium. It is the required time for adhering to the solid phase. Otherwise cells will also be aspirated off with the medium, making the study claims untenable.\nAgain, in the same section the authors have nowhere mentioned the standard drugs and concentrations thereof, if at all used for the assay. But in the result section the results of the standard drugs are mentioned on the top. How come?\nAuthors should also mention the exact timing of incubation performed by them after MTT addition, not 2–4 hrs, which is very arbitrary and not acceptable.\nEvaluation of MIC & Evaluation of IC50 and anti-promastigote assay.\nA big question is whether the authors performed two different tests to evaluate the MIC and IC50 values individually. If not, then the headings can be merged as Antipromastigote assay to evaluate the IC50 and MIC of the test extracts.\nWhat do the authors mean by serial dilutions of 2.0×10³, 1.0×10³, 5.0×10², 2.5×10², 1.25×10² and 6.25×10¹? The dilutions are written in a different manner, either as log or serial, but not like this. 
If these are the extracts' dilutions, then there should be uniformity in the unit representation of the concentration of the drugs for all assays.\nSomewhere it is mg/ml and somewhere it is µg/ml for the same drugs.\nAgain there is no discussion of standard drugs under this subheading.\nAuthors should mention the method of visual reading by microscopic observation (if done) before mentioning the survival comparison.\nAnti-amastigote assay: The amount of sterile cold PBS injected in the peritoneum seems very low in volume. 10 µl will go nowhere and no one can withdraw it back. Write carefully.\nWhat was the ratio of macrophages and promastigotes used for this assay? No counting and ratio is mentioned.\nWere 4 hrs sufficient for the infection incubation, as the promastigotes take at least 8–12 hrs for engulfment, up to a maximum of 24 hrs? In the manuscript, again, the plate was incubated for 24 hrs, but after the washing step. After reading their protocol, we really cannot believe the results, if these are genuine. And what was the need of the second incubation? In this section the concentrations of plant extracts used are missing.\nAuthors must have mentioned the magnification (×) used for the microscopic observation.\nAmastigotes are only possible to visualize at 1000× magnification with oil, which is not possible in a 24-well plate directly. We can’t understand how the authors could count the amastigotes.\nStatistical analysis: All the experiments could have been performed in duplicate wells in the same assay. They also could have been repeated thrice for mean and SD values, which is not done in the cytotoxic and antipromastigote assays mentioned in this manuscript. Culture was done in duplicate only in the anti-amastigote assay, for which the p value can be calculated in a simple way. ANOVA and Tukey’s test are not needed.
This is missing in this otherwise poorly written paper. Even the concentration unit is not uniform. The IC50 values of the standard drugs are not even appropriate.\nFigure 1 is not even clear enough to visualize the macrophages. How come the authors counted the amastigotes?\nRecommendation: The authors are encouraged to go through a recent publication by Srivastava et al., 2017 and resubmit the manuscript afresh.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1329
|
https://f1000research.com/articles/7-1327/v1
|
21 Aug 18
|
{
"type": "Opinion Article",
"title": "Silicon Valley new focus on brain computer interface: hype or hope for new applications?",
"authors": [
"Stefan Mitrasinovic",
"Alexander P.Y. Brown",
"Andreas T. Schaefer",
"Steven D. Chang",
"Geoff Appelboom"
],
"abstract": "In the last year there has been increasing interest and investment in developing devices to interact with the central nervous system, in particular a robust brain-computer interface (BCI). In this article, we review the most recent research advances and the current host of engineering and neurological challenges that must be overcome for clinical application: in particular, space limitations, isolation of targeted structures, replacement of probes following failure, delivery of nanomaterials, and the processing and understanding of recorded data. Neural engineering has developed greatly over the past half-century, which has allowed for the development of better neural recording techniques and clinical translation of neural interfaces. Implementation of general-purpose BCIs faces a number of constraints arising from engineering, computational, ethical and neuroscientific factors that still have to be addressed. Electronics have become orders of magnitude smaller and computationally faster than neurons; however, there is much work to be done in decoding the neural circuits. New interest and funding from the non-medical community may be a welcome catalyst for focused research and development, playing an important role in future advancements in the neuroscience community.",
"keywords": [
"Brain computer interface",
"brain machine interface",
"neuralace"
],
"content": "Abbreviations and Acronyms\n\n3D, Three-dimensional\n\nBCI, Brain-Computer Interface\n\nBMI, Brain-Machine Interface\n\nCNS, Central Nervous System\n\nCPU, Central Processing Unit\n\nDPU, Decoding Processing Unit\n\nECoG, Electrocorticography\n\nEEG, Electroencephalogram\n\nfMRI, Functional Magnetic Resonance Imaging\n\nPET, Positron Emission Tomography\n\nPNS, Peripheral Nervous System\n\nTPU, Tensor Processing Unit\n\n\nIntroduction\n\nIn the last year, there has been an explosion of interest from entrepreneurs looking to become actively involved in developing devices to interact with the central nervous system. These have included the likes of Elon Musk (Neuralink Inc., California, USA), Mark Zuckerberg (Facebook Inc., California, USA), Bryan Johnson (Kernel, California, USA), as well as dedicated startups such as Paradromics (San Jose, California, USA) or Cortera (Berkeley, California, USA), and even DARPA (Defense Advanced Research Projects Agency, Virginia, USA), spurred on in part by the BRAIN initiative1. Each of these individuals and their respective companies shares a particular focus on developing a robust brain-computer interface (BCI). We define BCI, for the purposes of this discussion, as a technological system designed to provide a stable mapping and modulation of activity within neural networks of the central nervous system. Therefore, at the very minimum, a working BCI will require both a physical interface to the brain (brain-machine interface; BMI) and computer systems that can process high-bandwidth signals in real time.\n\nIt is important to recognize that building BCIs for the peripheral nervous system (PNS) and the central nervous system (CNS) presents very different engineering and neurological challenges. 
These include, in particular, space limitations for processing units, isolation of targeted structures, replacement of probes following failure, and delivery of nanomaterials in vivo2,3. For the purpose of this commentary, we will focus on the CNS, as this is the area of particular interest to the entrepreneurs highlighted above.\n\nUnderstanding the information transfer and processing of the nervous system is one of the most urgent challenges faced by the biomedical community, with a plethora of academic and clinical applications, including better understanding of aging, neurodegenerative diseases and interfaces for prosthetics and implants. For example, recent advances in chronic neural recording devices have facilitated the willful control of robotic prosthetic limbs for the treatment of paralysis4 and improved seizure prevention with chronic telemetry in refractory epilepsy5,6. There are many different kinds of potential BCIs that will each serve independent functions; however, all systems must tackle three fundamental problems: how to accurately record information from relevant neural systems, how to decode such information, and how to stimulate and manipulate neuronal dynamics in an appropriate and meaningful way.\n\n\nNeural engineering progress\n\nThe origins of neural engineering stretch back to early attempts to record activity chronically in the 1950s, when electrodes were implanted into the cortex of rhesus monkeys to measure electrical activity in the central nervous system7,8. Great innovations have been made in neural recording techniques, which have allowed the number of simultaneously recorded neurons to double approximately every 7 years9, mimicking Moore’s law albeit at a much reduced rate10. Early clinical applications of BMIs centered on the restoration of perception to patients with sensory deficits. 
One of the pioneering studies was the work on potential cochlear implants in the 1970s that eventually reached life-changing reality in the 1980s for patients11–13.\n\nIn parallel to the development of the cochlear implant, researchers worked with the CNS by applying electrical current to the visual cortex of blind patients through grids of surface electrodes implanted over the visual cortex, thus developing visual prostheses14,15. These systems allowed blind subjects to learn to recognize simple visual objects16. Neural engineering continued to improve with multi-channel neuronal recordings allowing owl monkeys17 and later humans4 to control two- and three-dimensional movements of a robot arm with multiple degrees of freedom. Neuro-prosthetic research has undoubtedly benefited from these advances, but additional design parameters need to be included for effective long-term operation and clinical translation of neural interfaces.\n\nWhile research in neural engineering has been steadily improving the bandwidth of BCIs, the pace of this exponential increase falls far short of that seen in the silicon chip industry9. At the current pace, the goal set by DARPA of recording from 10⁶ neurons simultaneously would not be expected to be reached for around 80–100 years. Increasing interest and funding from members of Silicon Valley may prove to be a useful catalyst for the field and promote investigation of new applications of BCIs. For example, Facebook Inc. is investigating methods of non-verbal communication that will not require the virtual keyboards that are currently being used by patients with BrainGate18.\n\n\nChallenges\n\nDespite advances in recent years, implementation of general purpose BCIs faces a number of constraints arising from engineering, computational, ethical, and neuroscientific factors. 
The future success of BCI is often imagined as a function of the capability to produce multi-electrode arrays with a greater and greater density of recording sites. Here, we outline several other challenges that must be overcome in parallel if BCI is to become of more than limited interest.\n\nPerhaps the most immediate barrier to wider usage of BCI systems is the difficulty in implanting them. Non-invasive modalities, such as the electroencephalogram (EEG), positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), lack the spatial resolution to record detailed activity at the level of the neuronal circuit, and so can only be used for very simple low-bandwidth (typically binary choice) interfaces. There is no technology currently available that can record an action potential without the need for major surgery, although research into less invasive endovascular electrodes19 and surface electrocorticogram (ECoG) devices is ongoing20. Furthermore, the quality of recordings obtained via implantable electrodes degrades over time due to a combination of gliosis21,22, neuronal depletion23,24, and degradation of the system itself22,25,26. This tends to limit recording times to a period of months or a few years at most, although the use of compliant materials27 or soft ultra-thin wires28,29 designed to reduce mechanical shear has shown promise in reducing these effects.\n\nBy definition, detecting neuronal signals constitutes only one half of the BCI. These signals must then be able to be communicated to a computer via either a wired or wireless connection. A wired connection poses further challenges, necessitating a tunneled wire through the cranium. Wireless systems avoid this challenge, but create a host of new problems in turn, including available bandwidth, safety, and the need for an implantable battery – which may last only a few months powering a large BCI system30,31. 
To give a sense of the challenge here, we calculate that a 100,000-electrode system would require a communication protocol at least as fast as a Thunderbolt™ 3 connection (Apple, Inc. & Intel, Inc.), currently the fastest available consumer-level wired standard. The required bandwidth could be reduced drastically by on-chip processing, reducing the dimensionality of the data, but this in turn requires vastly more complex devices, limiting the number of electrodes per device and greatly increasing its volume – a critical flaw in any proposed intracranial device. Furthermore, onboard processing of any kind poses serious and mostly unexplored challenges in terms of the energy dissipation required to maintain the device at body temperature so as not to cause thermal damage to the brain.\n\nCurrent multi-electrode array systems offer up to around one thousand recording channels32, in turn providing monitoring for hundreds of neurons from a single area33, sufficient for the control of several univariate parameters. More general-purpose BCIs will require the sampling of tens if not hundreds of thousands of units, potentially from multiple cortical regions. This poses engineering and surgical challenges far beyond what is currently achievable.\n\nComputational and data analysis challenges arise from the highly parallel nature of multiunit recordings. In general, four steps are used to decode neural activity. Firstly, the signal must be filtered to remove extraneous noise. Secondly, spikes must be detected. Thirdly, these spikes must be ‘sorted’, typically by waveform, in order to be assigned to ‘units’ – putative single neurons. Lastly, the inferred population spike train must be decoded in order to provide a control signal. 
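The detection stage of this pipeline (step two) can be illustrated with a toy sketch; the trace and threshold below are made-up values, and a real pipeline would operate on filtered, multi-channel data before sorting and decoding:

```python
# Toy sketch of spike detection via upward threshold crossings.
def detect_spikes(trace, threshold):
    """Return sample indices where the signal crosses threshold upward."""
    return [i for i in range(1, len(trace))
            if trace[i - 1] < threshold <= trace[i]]

# Synthetic trace: baseline noise with two spike-like excursions (made-up data).
trace = [0.1, -0.2, 0.0, 3.5, 0.2, -0.1, 0.1, 4.1, 0.3, 0.0]
print(detect_spikes(trace, threshold=3.0))  # → [3, 7]
```

Thresholding like this is essentially solved for clean signals; the hard, open problems lie in the subsequent sorting and decoding stages, as discussed next.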
Whilst the first and second of these steps are essentially solved for sufficiently high signal-to-noise systems, spike sorting is still an area of active research34,35, with no clear optimal solution; it often relies on semi-automated systems that require a great deal of human input to fine-tune. Spike sorting may not be strictly necessary for the training of accurate decoders, as the raw spatiotemporal pattern of activity may suffice, but this may in turn reduce the dimensionality of the data.\n\nReal-time processing of highly parallel recording systems remains a key challenge in the field. Promising technologies include a move away from general-purpose central processing units (CPUs) to application-specific integrated circuits designed to perform a limited number of operations, such as Google’s tensor processing unit (TPU) or the graphics processing units (GPUs) found in most computers. It is not unreasonable to suspect that the solution to decoding neural activity may lie in dedicated ‘decoding processing units’ (DPUs).\n\nThe physical scalability of BCI systems also poses a profound challenge. The brain is a three-dimensional (3D) structure; unlike planar silicon wafers, manufacturing devices with a complex 3D structure and integrated electronics poses a particular problem. Furthermore, current designs of multi-electrode arrays are typically not well suited to rapid scalability, requiring extensive redesign for each generation of device.\n\nEven if this problem can be overcome, it may seem intuitive that more units result in greater bandwidth; however, the distributed nature of cortical processing has been shown to result in a decreasing marginal value of each additional unit in terms of information retrieval36. The common mantra that more units results in more information therefore does not follow, at least not proportionally. 
We simply do not understand well enough the nature of distributed information representation and processing in the neocortex to be able to make more than a rudimentary estimate of what a particular sequence of activity might ‘mean’.\n\n\nConclusion\n\nThe literature reflects decades of neuroscience research effort in developing tools to probe the signaling complexity of the nervous system, with several clinical applications emerging. Although orders of magnitude smaller and computationally faster than neurons, our electronics cannot mimic the complexity of neural systems. Current understanding of the function of neural circuits could be compared to trying to understand the internet by means of a few dozen well-placed potentiometers in the data centers of service providers. This is not to disparage the efforts of neuroscientists, far from it, but rather to underscore that decoding neural circuits ranks among the deepest and most complex contemporary endeavors, and it will not be solved overnight by Silicon Valley enthusiasm and zeal alone. However, we consider that many of the engineering challenges outlined above are amenable to focused research and development, particularly those surrounding miniaturization and parallelization of recording systems. We support the interest of entrepreneurs in placing their focus on the neuroscience community, and we look forward to the future advancements that will undoubtedly be realized in the coming years.\n\n\nData availability\n\nNo data are associated with this article",
"appendix": "Competing interests\n\nThis article is the sole work of its authors. ATS is a co-founder of and holds shares in Paradromics Inc., a company developing scalable electrophysiology; patent applications 14/937,740 and 15/259,435, co-filed by ATS, refer to technology related to BCI / BMI; there is no other potential conflict of interest.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nThe White House: Fact Sheet: BRAIN Initiative. obamawhitehouse.archives.gov. 2013.\n\nChen Y, Liu L: Modern methods for delivery of drugs across the blood-brain barrier. Adv Drug Deliv Rev. 2012; 64(7): 640–65.\n\nChen R, Canales A, Anikeeva P: Neural recording and modulation technologies. Nature Publishing Group. 2017; 2: 16093.\n\nHochberg LR, Bacher D, Jarosiewicz B, et al.: Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012; 485(7398): 372–5.\n\nCook MJ, O'Brien TJ, Berkovic SF, et al.: Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. Lancet Neurol. 2013; 12(6): 563–71.\n\nMorrell MJ, RNS System in Epilepsy Study Group: Responsive cortical stimulation for the treatment of medically intractable partial epilepsy. Neurology. 2011; 77(13): 1295–304.\n\nLilly JC: Electrode and cannulae implantation in the brain by a simple percutaneous method. Science. 1958; 127(3307): 1181–2.\n\nStrumwasser F: Long-term recording from single neurons in brain of unrestrained mammals. Science. 1958; 127(3296): 469–70.\n\nStevenson IH, Kording KP: How advances in neural recording affect data analysis. Nat Neurosci. 2011; 14(2): 139–42.\n\nMoore GE: Cramming more components onto integrated circuits. Electronics. 1965; 114–7.\n\nWilson BS, Dorman MF: Cochlear implants: a remarkable past and a brilliant future. Hear Res. 2008; 242(1–2): 3–21.\n\nEddington DK: Speech recognition in deaf subjects with multichannel intracochlear electrodes. Ann N Y Acad Sci. 1983; 405(1): 241–58.\n\nHouse WF: Cochlear implants. Ann Otol Rhinol Laryngol. 1976; 85 suppl 27(3Pt2): 1–93.\n\nDobelle WH, Mladejovsky MG: Phosphenes produced by electrical stimulation of human occipital cortex, and their application to the development of a prosthesis for the blind. J Physiol. 1974; 243(2): 553–76.\n\nBrindley GS: Sensations produced by electrical stimulation of the occipital poles of the cerebral hemispheres, and their use in constructing visual prostheses. Ann R Coll Surg Engl. 1970; 47(2): 106–8.\n\nDobelle WH, Quest DO, Antunes JL, et al.: Artificial vision for the blind by electrical stimulation of the visual cortex. Neurosurgery. 1979; 5(4): 521–7.\n\nWessberg J, Stambaugh CR, Kralik JD, et al.: Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature. 2000; 408(6810): 361–5.\n\nPandarinath C, Nuyujukian P, Blabe CH, et al.: High performance communication by people with paralysis using an intracortical brain-computer interface. eLife. 2017; 6: e18554.\n\nOxley TJ, Opie NL, John SE, et al.: Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity. Nat Biotechnol. 2016; 34(3): 320–7.\n\nKhodagholy D, Gelinas JN, Zhao Z, et al.: Organic electronics for high-resolution electrocorticography of the human brain. Sci Adv. 2016; 2(11): e1601027.\n\nKozai TD, Jaquins-Gerstl AS, Vazquez AL, et al.: Brain tissue responses to neural implants impact signal sensitivity and intervention strategies. ACS Chem Neurosci. 2015; 6(1): 48–67.\n\nPolikov VS, Tresco PA, Reichert WM: Response of brain tissue to chronically implanted neural electrodes. J Neurosci Methods. 2005; 148(1): 1–18.\n\nSzarowski DH, Andersen MD, Retterer S, et al.: Brain responses to micro-machined silicon devices. Brain Res. 2003; 983(1–2): 23–35.\n\nKotzar G, Freas M, Abel P, et al.: Evaluation of MEMS materials of construction for implantable medical devices. Biomaterials. 2002; 23(13): 2737–50.\n\nWard MP, Rajdev P, Ellison C, et al.: Toward a comparison of microelectrodes for acute and chronic recordings. Brain Res. 2009; 1282: 183–200.\n\nBarrese JC, Rao N, Paroo K, et al.: Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates. J Neural Eng. 2013; 10(6): 066014.\n\nScholten K, Meng E: Materials for microfabricated implantable devices: a review. Lab Chip. 2015; 15(22): 4256–72.\n\nDu ZJ, Kolarcik CL, Kozai TDY, et al.: Ultrasoft microwire neural electrodes improve chronic tissue integration. Acta Biomater. 2017; 53: 46–58.\n\nGuitchounts G, Markowitz JE, Liberti WA, et al.: A carbon-fiber electrode array for long-term neural recording. J Neural Eng. 2013; 10(4): 046016.\n\nRajangam S, Tseng PH, Yin A, et al.: Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates. Sci Rep. 2016; 6: 22170.\n\nKim S, Bhandari R, Klein M, et al.: Integrated wireless neural interface based on the Utah electrode array. Biomed Microdevices. 2009; 11(2): 453–66.\n\nSteinmetz NA, Pachitariu M, Burgess CP, et al.: Recording large, distributed neuronal populations with next-generation electrode arrays in behaving mice. Neuroscience. 2016.\n\nObien ME, Deligkaris K, Bullmann T, et al.: Revealing neuronal function through microelectrode array recordings. Front Neurosci. 2015; 8: 423.\n\nRossant C, Kadir SN, Goodman DFM, et al.: Spike sorting for large, dense electrode arrays. Nat Neurosci. 2016; 19(4): 634–41.\n\nRey HG, Pedreira C, Quian Quiroga R: Past, present and future of spike sorting techniques. Brain Res Bull. 2015; 119(Pt B): 106–17.\n\nCarmena JM, Lebedev MA, Crist RE, et al.: Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol. 2003; 1(2): E42."
}
|
[
{
"id": "40099",
"date": "06 Nov 2018",
"name": "Jeffrey V. Rosenfeld",
"expertise": [
"Professor Jeffrey V Rosenfeld is an academic neurosurgeon who has expertise in the development of an implanted bionic vision device (for the brain). He is Director of the Monash Institute of Medical Engineering and a Professor of Surgery at Monash University, Australia. Dr Yan T. Wong is an Electrical Engineer and Physiologist whose main research interest is BCI of non-human primate motor and sensory systems. He is also involved in the Bionic Vision Device Development."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis is a concise review by Mitrasinovic et al. concerning the evolution of Brain Computer Interfaces (BCIs), the current ‘state-of-the-art’ of BCIs and the future challenges, particularly in relation to electrode design, electrode placement and signal processing.\n\nIt is not immediately apparent to us why Silicon Valley is in the title. More should be made of this in the introduction. Specifically, in what areas do the authors believe that Silicon Valley can help advance BCIs to clinical reality? Are the private entrepreneurs mentioned in paragraph 1 based in Silicon Valley? A greater review of companies involved in BCI research may even be warranted. There are many research groups outside Silicon Valley where advances in computer and electronic engineering for BCIs are also taking place.\n\nNanomaterials are mentioned in paragraph 1 but with no explanation as to how these are used in BCIs or why they would replace materials already in use. The discussion on nanomaterials could be moved to the section discussing improvements in electrode design.\n\nWhile recording electrode numbers have roughly doubled every seven years, is this the case for BCIs? The challenges in recording acutely from non-human primates with non-FDA approved electrodes are very different from the goal of BCIs in humans. 
At a minimum this difference should be highlighted to not give the readers an unrealistic view of progress.\n\nThe difference between peripheral and central nervous system is highlighted but what about the major differences between non-invasive EEG platforms and invasive implanted electrodes on the surface or penetrating the brain? This should also be highlighted. There is rapid development of non-invasive EEG recording interfaces with the brain which avoid the inconvenience, risks and costs of surgical implantation. Will advances in signal processing increase the accuracy of these non-invasive BCIs and lessen the applications or need for implanted BCIs? It is difficult to imagine how EEG interfaces could compete with the implanted BCIs on the basis of the volume, precision and reliability of the information being transferred.\n\nPET and fMRI are mentioned as modes of BCI in addition to EEG. These are not relevant to developing clinically- and commercially-relevant BCIs for ambulant individuals and we suggest that these modalities be deleted. At present the only way to activate neurons noninvasively is with transcranial magnetic stimulation (TMS).\n\nParagraph 2, page 4: The description of detecting neural signals only being “one half of a BCI” seems a little oversimplified. Not only do you need to record neural signals and decode them but next you need to use these signals to control an output such as a cursor on a screen or robotic limb and then also provide accurate feedback to the patient via electrical stimulation or other means. These challenges should be discussed.\n\nParagraph 2, page 4: We would submit that wired implanted BCIs with a connector penetrating the scalp have no future as a permanently implanted device because of infection risk and inconvenience. Implanted BCIs must become wireless if they are to have any physician or patient uptake. Wireless devices are already described, for example in Lowery et al. (20151), Rajangam et al. 
(20162) and Vansteensel et al. (20163). We agree there are challenges as the number of electrodes increase.\n\nParagraph 3, page 4: The number of electrodes required to adequately perform certain tasks is not known. Detailed vision, speech processing and fine motor control and the encoding and manipulation of memory would likely require significantly more electrodes than are currently available. However, vast increases in electrode numbers may not be required for all BCIs. The challenges of recording from ever increasing numbers of neurons has been laid out, but the challenges in basic neuroscience in understanding the basic coding of neurons in controlling movement should also be highlighted, for example, little is still known about the coding of control for grasping in the dorsal and ventral pre-motor cortices so this lack of knowledge affects our ability to extract information from these recorded populations of neurons.\n\nParagraph 4, page 4: The four steps that you outline to decode neural signals are focused on spike decoding, whereas the paragraphs before outline techniques that will not result in spike recordings, e.g. EEG, ECOG, endovascular devices. A broader description on decoding algorithms that includes the use of low frequency continuous signals such as the Local Field Potentials (LFP) possibly separated in step called “feature extraction” is needed. Before neural signals can be decoded, algorithms also need to be trained which is not a trivial problem for the target patients.\n\nParagraph 2, column 2, page 4: A large push in BCIs is getting the hardware necessary to be small enough and run on low power to allow patients to be mobile. 
In the description of parallel processing and Central Processing Units (CPUs), this challenge should be discussed.\n\nThe future design of BCIs using light or magnetic energy as an alternative to electricity could also be included in the discussion.\n\nMention could also be made of the surgical risk of implantation which includes haemorrhage, epilepsy and infection. The mitigation of risk needs to be factored in to the design of the devices and included in the informed consent process. It is important for physicians to work alongside engineers and scientists in the development of BCIs so that they are as safe and practical as possible.\n\nA mention of the many ethical challenges such as informed consent, agency, stigma, equity, neural enhancement, privacy and security of data is important in a general review such as this. For example, there are major ethical challenges to apply BCIs in severely disabled individuals to allow communication or control of assist devices such as locked-in syndrome or advanced amyotrophic lateral sclerosis (ALS).\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "40102",
"date": "15 Nov 2018",
"name": "Ujwal Chaudhary",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nHuman intention, and the actions performed to support it, are underpinned by the impeccable coordination of the peripheral nervous system (PNS) and the central nervous system (CNS). Any disruption of this coordination, for example dysfunction of afferent or efferent pathways, spinal cord injury, or neurological disorders affecting brain function, affects the normal functioning of the human body. The individual becomes paralyzed and is unable to perform simple day-to-day activities such as walking or talking. Brain-computer interfaces (BCIs) have been developed to help such individuals; BCIs aim to bypass the dysfunctional pathways and interface external mechanical or electrical devices with the functioning brain of an individual. Both non-invasive and invasive BCIs have been developed: non-invasive BCIs use neuroimaging techniques to acquire brain signals from the surface of the scalp, while invasive BCIs place electrodes on the cortical surface of the brain or insert them into the brain. 
The invasive technique where the electrodes known as microelectrodes are inserted in the brain records spike signals either from a single neuron, known as single unit activity (SUA) or from a group of neurons, known as local field potentials (LFPs).\n\nIn this article the authors have done a great job in summarizing the technical challenges faced by researchers during recording neural signals invasively and have discussed the different approaches developed to solve these problems. Given the recent media interest in the application of BCI as a tool for the means of communication and rehabilitation of paralysed people, several entrepreneurs have invested huge resources in developing BCI. The authors have applauded such efforts and have presented a bright outlook on the effect of such interests on the development of BCI for real world applications.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "40098",
"date": "21 Jan 2019",
"name": "Theresa M. Vaughan",
"expertise": [
"BCI research, specifically for communication and sensorimotor rhythm for control."
],
"suggestion": "Approved",
"report": "Approved\n\nThis article is a very good summary of the state of play in BCI research as of 2019. It will be of interest to the BCI and general neuroscience communities.\n\nNumerous funding sources with deep pockets, coming from very well established, and successful companies, start-ups, as well as the government signal that the time for BCI breakthroughs is clearly anticipated.\n\nIt may be appropriate to mention some of the already met needs of some users, such as the various P300 Speller systems in use by patients with motor neuron diseases. This population lacks any other communication means, and recent studies with ALS subjects’ use of BCI in their homes have shown reasonable success.\n\nThe authors have touched on the major and significant challenges. The challenge of safe, easily deployed high density electrodes, disposed in some 3D configuration inside the brain is extremely great. The reliable readout of these electrodes, and their connection to some external computing means is similarly difficult. Integrating the computational means with the sensors is clearly desirable but very difficult.\n\nFor fifty years this reviewer has seen technologies compared to that of the transistor, which grew from transistor radios with seven individual transistors, to 100 million transistors per square millimeter today. 
Approximately a 5×10⁶-fold improvement.\n\nUnfortunately, no other technology has had a similar arc, and BCI is unlikely to be similarly blessed with inexpensive, scalable improvements.\n\nSome comment could be made on the possibility of using artificial intelligence of the type currently in use to learn, master and dominate games of chess and Go.\n\nRecommend indexing.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1327
|
https://f1000research.com/articles/7-461/v1
|
13 Apr 18
|
{
"type": "Review",
"title": "Factors influencing the higher incidence of tuberculosis among migrants and ethnic minorities in the UK",
"authors": [
"Sally Hayward",
"Rosalind M. Harding",
"Helen McShane",
"Rachel Tanner",
"Sally Hayward",
"Rosalind M. Harding",
"Helen McShane"
],
"abstract": "Migrants and ethnic minorities in the UK have higher rates of tuberculosis (TB) compared with the general population. Historically, much of the disparity in incidence between UK-born and migrant populations has been attributed to differential pathogen exposure, due to migration from high-incidence regions and the transnational connections maintained with TB endemic countries of birth or ethnic origin. However, focusing solely on exposure fails to address the relatively high rates of progression to active disease observed in some populations of latently infected individuals. A range of factors that disproportionately affect migrants and ethnic minorities, including genetic susceptibility, vitamin D deficiency and co-morbidities such as diabetes mellitus and HIV, also increase vulnerability to infection with Mycobacterium tuberculosis (M.tb) or reactivation of latent infection. Furthermore, ethnic socio-economic disparities and the experience of migration itself may contribute to differences in TB incidence, as well as cultural and structural barriers to accessing healthcare. In this review, we discuss both biological and anthropological influences relating to risk of pathogen exposure, vulnerability to infection or development of active disease, and access to treatment for migrant and ethnic minorities in the UK.",
"keywords": [
"Tuberculosis",
"UK",
"Migrants",
"Ethnic minorities",
"Socio-economic inequality",
"Stigma"
],
"content": "Introduction\n\nTuberculosis (TB) is a bacterial disease caused by Mycobacterium tuberculosis (M.tb), which most commonly affects the lungs1. M.tb infection is acquired by inhalation of infectious particles released from close contacts2. While 10% of those infected develop active disease, the majority of individuals mount an effective immune response leading to successful containment of M.tb growth; a condition known as latent M.tb infection or LTBI3. Latent infection, which is asymptomatic, ordinarily has a 5–10% lifetime risk of reactivation2. The main symptoms of active disease include persistent coughing (sometimes producing blood), sweating, fever, weakness and weight loss3. Untreated, the 10-year case fatality rate is between 54 and 86% in HIV-negative individuals4. In 15–20% of active cases, and usually in those with immunosuppression, the infection spreads outside the lungs causing extra-pulmonary TB5. LTBI may be diagnosed through cutaneous tuberculin skin test (TST) or interferon-γ release assays (IGRA), while clinically suspected TB disease is evaluated through chest radiograph and diagnostic microbiology for acid-fast bacilli. Effective antibiotic treatment is available, but involves long and complex regimens. Furthermore, rates of multi-drug-resistant TB (MDR-TB) and extensively-drug-resistant TB (XDR-TB) are increasing6.\n\nGlobalisation, conflict and financial reasons have become increasingly important drivers of migration flows, leading to more permanent migrants moving from low/middle income to high-income countries7. In the UK, a significant proportion of foreign-born migrants arrive from former colonies in sub-Saharan Africa and the Indian Subcontinent (ISC)8. Incidence of TB disease is higher among all migrant and ethnic minority groups living in the UK compared with the UK-born population. 
In 2015, 72.5% of individuals with diagnosed TB disease were foreign-born, with India and Pakistan the most frequent countries of birth among such cases9. While TB rates have been falling slowly across all UK populations since 2011, they remain 15 times higher in the foreign-born than the UK-born population. Furthermore, within the UK-born population, non-white ethnic groups had TB rates 3 to 19 times higher than the white ethnic group9. There is much heterogeneity in both absolute number of cases and incidence rates (per 100,000 of population group) among migrants from different countries and among different ethnic groups. While number of cases is confounded by size of population group, variation in incidence rate reflects varying levels of risk for different migrant and ethnic groups. Migrants from the ISC (India, Pakistan and Bangladesh) and black ethnic groups demonstrate particularly high incidence9.\n\nWe discuss the biological, social and cultural factors relating to risk of pathogen exposure, vulnerability to infection or development of active disease, and access to treatment which contribute to the increased incidence of TB in migrant and ethnic minorities in the UK (Figure 1).\n\n\nEpidemiology\n\nThe higher burden of TB observed among foreign-born individuals in the UK could be due to arrival of migrants with active TB, reactivation of remotely-acquired LTBI post-arrival, or local transmission10. Meta-analyses of screening for active TB at entry have indicated that only a small proportion (~0.35%) of immigrants have active TB at time of arrival in the EU/EEA11,12. Since 2012, the UK Home Office has required pre-arrival screening for active pulmonary TB disease for all long-term visa applicants from endemic countries; those diagnosed with active disease are denied a medical clearance certificate13. 
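To make concrete why case counts are confounded by group size while per-capita rates are comparable, here is a minimal illustration; the case counts and population sizes are hypothetical, not UK surveillance figures:

```python
# Incidence rate per 100,000, the standard comparison used in the review.
def incidence_per_100k(cases, population):
    return cases / population * 100_000

# Hypothetical numbers: a small group with fewer absolute cases
# can still carry a much higher incidence rate than a large group.
print(incidence_per_100k(300, 400_000))      # → 75.0 per 100,000
print(incidence_per_100k(2000, 40_000_000))  # → 5.0 per 100,000
```

Comparing rates rather than raw counts is what makes the heterogeneity between migrant and ethnic groups described above visible.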
Thus arrival of migrants with active TB is not thought to contribute significantly to the overall burden of disease among foreign-born individuals in the UK; rather several studies suggest a more prominent role for the reactivation of remotely-acquired LTBI post-arrival10,14,15. In the initial years following arrival in a lower incidence setting, migrants with LTBI have a higher risk of reactivation than the host population16–18.\n\nLocal transmission within immigrant communities in the UK may also contribute to the higher incidence of TB cases observed in migrants and ethnic minorities. Such groups are more likely to live in densely-populated areas with a high concentration of their ethnic community, which may foster spread of TB, particularly given the mode of transmission19. Moreover, Bakhshi suggests that larger household size among migrants and ethnic minorities - perhaps due to cultural factors favouring a multi-generational rather than nuclear family structure - increases M.tb transmission, as approximately one-third of household contacts will become infected in a household with an active TB case20.\n\nIn order to establish the relative importance of local transmission versus reactivation of LTBI, molecular fingerprinting and typing techniques have been applied since the 1990s. Genomic clusters are assumed to represent epidemiologically linked chains of recent transmission, whereas unique isolates represent reactivational disease21. Early findings were conflicting, likely due to the inability of such techniques to reliably distinguish past and recent transmission22. The advent of whole-genome sequencing of M.tb offered additional resolution, and in one such study of Oxfordshire TB cases in 2007–2012, those born in a high-incidence country were less likely to be part of a genomic cluster than those born in a low-incidence country (especially the UK), even when adjusting for social risk factors23. Furthermore, Aldridge et al. 
identified only 35 of over 300,000 migrants screened prior to entry into England, Wales and Northern Ireland as assumed index cases, defined as the first case in a genomic cluster24. These findings suggest that reactivation of LTBI is more important in explaining the higher incidence of TB among migrants than exogenous infection due to local transmission.


Differential exposure

Rates of active TB disease diagnosed after arrival in the UK correlate with TB incidence in the country of origin24, indicating that differential exposure among migrants is a key factor influencing TB incidence in foreign-born populations. Some 13% of foreign nationals in the UK are from a country where TB incidence is ≥250 cases per 100,0008. After the Second World War, substantial numbers of migrants arrived from the Commonwealth and former British Empire, particularly from the ISC. These movements were driven by factors such as Britain’s labour shortages for post-war reconstruction and political turbulence after decolonisation, for example following the creation of Pakistan25. Many Commonwealth countries have high TB incidences: the highest incidences globally are found in Africa (275 per 100,000 population in 2015) and South East Asia (246 per 100,000)6. Migrants from these countries are at a greater risk of having been exposed to M.tb and contracting LTBI. Indeed, of the 11 countries that were each the source of more than 2% of foreign-born cases in 2001–2003 (collectively accounting for 73% of foreign-born cases), all were in South Asia or sub-Saharan Africa26.

It is useful to consider migration to the UK from the perspective of transnationalism, defined as “the process by which immigrants forge and sustain multi-stranded social relations that link together their societies of origin and settlement”27.
From the 1920s until recently, migration research has tended to focus on the incorporation of migrants in their destination country rather than continued ties with their country of origin. However, since the 1990s, “the transnational turn” has provided “a new analytic optic”28. From this perspective, given that migrants maintain ties across the borders of nation-states, return visits to their country of origin and overseas visitors to the UK may result in increased exposure to M.tb. Where data were available, 23.2% of TB cases between May and December 2015 in England had travelled outside the UK (excluding Western Europe, US, Canada, New Zealand and Australia) in the two years before diagnosis, and 6.8% had received a visitor from overseas9. Such movements are increasing as globalisation leads to the intensification of international interconnectedness: in the UK, travel to visit family and friends abroad increased by 67% between 1998 and 200729. In 2007, UK residents made nearly 900,000 trips to the ISC for the purpose of visiting friends and family29.

There is evidence to suggest that travel to countries with high TB incidence increases the risk of acquiring LTBI, with greater risk associated with more prolonged travel and higher TB burden in the destination country30. Such individuals are then at risk of developing active disease after returning to the UK. A study in Blackburn, Hyndburn and Ribble Valley found that 12.8% of active cases among Indian, Pakistani and Bangladeshi ethnic groups occurred within 3 years of revisiting the ISC31. Furthermore, a case-control study in Liverpool found that TB cases were 7.4 times more likely to have recently received visitors from abroad32.
A case-control study of patients of ISC ethnic origin in North West England found a weak association between revisiting the ISC and TB cases within the following 3 years33.


BCG vaccination

Mycobacterium bovis Bacille Calmette-Guerin (BCG) is the only currently available vaccine against TB. BCG confers reliable protection against disseminated forms of TB such as miliary disease and meningitis in infants34,35. However, protection against pulmonary TB (the most common form of disease) varies considerably by geographical region36. While the UK has one of the highest levels of BCG efficacy (~80%), a low level, or complete lack, of protection has been reported in many migrants’ countries of origin, such as India36–38. The problem may be further compounded by limited access to vaccines and other healthcare in low-income country settings, but the prevalence of M.tb infection in endemic countries remains high even where there is good BCG coverage39,40.

It has been hypothesised that exposure to non-tuberculous environmental mycobacteria (NTM), which increases with proximity to the equator, plays a central role in limiting BCG efficacy41. Individuals may develop an immune response to NTM that either ‘masks’ or ‘blocks’ the ability of BCG to induce a protective response41–44. In a trial in Chingleput, India, 95% of individuals were PPD positive by 15–20 years of age45. In a trial in Malawi, where there is high NTM exposure and poor BCG efficacy, individuals with lower immune responses to NTM showed greater IFN-γ responses to BCG46. Furthermore, in mice sensitised with NTMs, the protective effect of BCG (but not a TB subunit vaccine) was considerably reduced47.
The low levels of BCG protection found in countries with high TB incidence likely contribute to the prevalence of LTBI among migrant populations.


Genetic susceptibility

The idea of a heritable component to TB was suggested as early as 1886 by Hirsch: “That phthisis [TB] propagates itself in many families from generation to generation is so much a matter of daily experience, that the severest sceptic [sic] can hardly venture to deny a hereditary element in the case”48. It is now well-established that host genetic factors can contribute to TB susceptibility and resistance49. Early studies demonstrated that monozygotic twins have a higher risk of developing active TB compared with dizygotic twins50,51, and several relevant loci have since been identified using candidate gene studies and genome-wide association studies52–57. Variation in susceptibility to M.tb infection and progression to active disease has been observed in different ethnic and geographic populations58,59. A study of >25,000 residents in racially-integrated nursing homes in Arkansas, USA, found that 13.8% of African-American residents, compared with 7.2% of Caucasian residents, had evidence of a new M.tb infection60. Furthermore, in a study of three TB outbreaks in two prisons, African-Americans had approximately twice the relative risk of becoming infected with M.tb compared with Caucasians60. However, although these studies largely controlled for environmental factors, confounders such as differing vitamin D levels cannot be ruled out.

More recently, genotyping technologies have supported a role for genetic ancestry in TB susceptibility. A case-control study genotyped a panel of ancestry informative markers to estimate the ancestry proportions in a South African Coloured population. African ancestry (particularly San ancestry) was higher in TB cases than controls, and European and Asian ancestries were lower in TB cases than controls58.
However, a limitation is that the study did not adjust for socioeconomic confounders. Variation in the alleles of genes encoding components of the immune response provides a possible mechanism for ethnic variation in TB susceptibility. Indeed, a study of African and Eurasian pulmonary TB patients in London indicated ethnic differences in the host inflammatory profile at presentation, including lower neutrophil counts, lower serum concentrations of CCL2, CCL11 and DBP, and higher serum concentrations of CCL5 in those of African ancestry61. These differences became more marked following initiation of antimicrobial therapy, and were associated with ethnic variation in host genotype but not M.tb strain61.

Host-pathogen co-evolution is a likely driver of variation in TB susceptibility in different human populations62. M.tb has been co-evolving with humans for millennia, with evidence that humans were exposed before the Neolithic transition63. The differential susceptibility of particular populations may be based on M.tb exposure history, with long-term exposure resulting in strong positive selection for resistance-related alleles. There is evidence from European colonialism that previously underexposed populations are more susceptible to TB, which played a large part in the deaths of many individuals in Qu’Appelle Indian and Inuit populations in Canada64,65. Similarly, in contrast to Europeans, Southern African populations have only been exposed to modern M.tb strains relatively recently58. It has been suggested that selection pressure for resistance would have been strongest in areas of high population density. Accordingly, duration of urban settlement is correlated with the frequency of the SLC11A1 1729 + 55del4 allele, which plays a role in natural resistance to intracellular pathogens including M.tb66.
Lower rates of TB among those of European ancestry could be due to centuries of exposure in densely populated settlements driving the evolution of increased resistance.\n\n\nVitamin D deficiency\n\nIt has long been recognised that low vitamin D levels are associated with active TB, with sunlight exposure in sanatoria and direct administration of vitamin D commonly used as treatments prior to the advent of antibiotics67. Today, evidence supporting a link between TB and vitamin D deficiency is accumulating68–71. A meta-analysis indicated a 70% probability that, when chosen at random from a population, an individual with TB disease would have a lower serum vitamin D level than a healthy individual72, although the direction of causality is not clear. It has been demonstrated that 1,25(OH)2D, the active metabolite of vitamin D, promotes the ability of macrophages to phagocytose M.tb and enhances the production of cathelicidin LL-37, an antimicrobial peptide that has direct bactericidal activity and attracts other immune cells to the site of infection73. There is evidence that a drop in serum vitamin D compromises the immune response and can lead to reactivation of LTBI74.\n\nVitamin D can be acquired from the diet or endogenously synthesised in the skin by the photolytic action of solar UV light on the precursor molecule 7-dehydrocholesterol75,76. Certain migrant and ethnic minority groups are at a greater risk of vitamin D deficiency77–79. Indeed, vitamin D levels have been shown to be lower among Asian children living in England compared with children of the same age in the general population80. Vegetarians are at increased risk of vitamin D deficiency since oily fish are a major dietary source81, and Hindu Asians are more frequently vegetarian due to socio-religious factors82. A study of Asian immigrants with TB disease in Wandsworth found that Hindus were at higher risk of contracting TB than Muslims83. 
Darker skin pigmentation also increases the risk of deficiency, as melanin reduces the efficiency of vitamin D synthesis from UV radiation79,84. Hindu women in particular have been found to be at high risk: in a 1976 study by Hunt et al., they spent on average only 2.5 hours a week outdoors for cultural reasons, whereas men were exposed to sunlight while travelling to work85.

1,25(OH)2D mediates its immune activity through binding to the vitamin D receptor (VDR) on target cells; thus receptor abnormalities as well as vitamin D deficiencies may impair host immunity to M.tb86. Some polymorphisms in the VDR gene increase susceptibility to TB, while others increase resistance87,88. In a systematic review of seven studies comparing the prevalence of VDR polymorphisms in TB patients and healthy controls, BsmI and FokI VDR polymorphisms were found to increase TB susceptibility89. The VDR gene shows striking genetic variation in allele frequency between populations90. Moreover, certain polymorphisms play different roles in different populations89, although further research is required to elucidate how this translates into variation in patterns of susceptibility and resistance to TB in different ethnic groups. Epigenetic variation in the VDR gene in different ethnic groups, arising from differential exposure to environmental factors, may influence gene regulation and therefore contribute to differential TB susceptibility. Indeed, methylation variable positions at the 3' end of VDR have been identified that are significantly correlated with ethnicity and TB status91.


Co-morbidities

Risk of progression to active TB disease is increased in those with conditions that impair immunity, such as diabetes mellitus (DM), human immunodeficiency virus (HIV) and chronic kidney disease (CKD)92.
Certain migrant and ethnic groups are at a higher risk of these conditions: DM disproportionately affects South Asians, HIV is more prevalent among those of African origin, and CKD affects both groups.

a) Diabetes mellitus

Clinicians have noted a possible association between TB and DM since the early 20th century93. More recently, a meta-analysis of 13 cohort studies found that DM increases the risk of active TB 3.11-fold94. A causal relationship between DM and impaired immunity to TB is supported by studies of diabetic mice, which have higher bacterial loads when infected with M.tb than non-diabetic mice95. DM-TB comorbidity increases the risk of both new and reactivational TB96. Various mechanisms have been suggested, including impaired immune function due to DM, complications of DM, and deficiencies in vitamins A, C and D associated with both TB and DM risk97.

Type 2 DM (T2DM) and associated risk factors, especially obesity, show marked associations with ethnicity98. In the UK, obesity and T2DM risk is significantly higher among South Asians (including those of ISC origin), and moderately higher among black African-Caribbeans, compared with white Europeans99. The prevalence of DM among South Asians in England was 14% in 2010, approximately double the 6.9% prevalence in the general population100. Some studies have suggested that ethnic differences in T2DM can be explained by differences in socio-economic status101, while others do not support this98. It is clear that there are complex genetic and environmental explanations for ethnic differences in T2DM prevalence, which are beyond the scope of this review (for example, see 102).

b) Human immunodeficiency virus

Infection with HIV is the strongest known risk factor for the development of TB disease103. TB-HIV co-infection synergistically worsens both conditions, leading it to be termed ‘the cursed duet’104.
HIV increases both the risk of rapid progression to active disease following infection and the risk of reactivation of LTBI, with an increased risk of TB throughout the course of HIV-1 disease105,106 and incidence rate ratios >5 when averaged across all levels of immunodeficiency107. The depletion of CD4+ T cells associated with HIV-1 infection is thought to play a major role in the increased risk of TB and its extra-pulmonary dissemination in infected individuals, as M.tb infected macrophages require CD4+ T cells to augment intracellular clearance108. Furthermore, peripheral blood lymphocytes of HIV-positive patients produce less interferon-γ when exposed to M.tb in vitro than those of HIV-negative patients109. These and other possible immune mechanisms such as chronic inflammation promoting an immunoregulatory phenotype and attenuation of phagocytosis have been recently reviewed110.

A systematic review on the prevalence, incidence and mortality of HIV-TB co-infection in Europe observed a disproportionate vulnerability of migrants to co-infection across studies111. Given that only 3.1% of TB cases in England in 2014 involved co-infection with HIV9, TB-HIV co-infection cannot be considered a major driver of higher TB incidence among migrants and ethnic minorities in the UK. However, it undoubtedly plays a role in explaining the higher incidence rates among those of African origin. Between 2010 and 2014, 87% of TB-HIV co-infected cases in England were foreign-born, of which 77.5% were born in sub-Saharan Africa9. This reflects the global distribution of TB-HIV co-infection: in the WHO African region, 38% of new TB cases were co-infected with HIV107. In turn, the global pattern of TB-HIV co-infection reflects the global distribution of HIV: 69.5% of all people living with HIV are in the WHO African region112.

c) Chronic kidney disease

The association between CKD and TB was first reported in 1974113, and has subsequently been confirmed by several studies114–116.
The mechanism is thought to be impaired immunity: CKD is associated with functional abnormalities in various immune cells, such as B and T cells, monocytes, neutrophils, and natural killer cells117. This increases the risk of both newly acquired and reactivated TB. Furthermore, immunosuppressive medications in kidney transplant patients are aimed at T cell-mediated immunity, which is central to maintaining TB latency in LTBI individuals118. Patients with CKD are 10–25 times more likely to develop active TB119.\n\nEthnic minorities in the UK are at a 3–5 times higher risk of developing CKD120. A study in London from 1994–1997 found that the incidence rate among white Caucasians was 58/million adult population per year, 221 among South Asians, and 163 among African-Caribbeans121. More recently, in a study of CKD patients with TB in South East London, 74% were born outside of the UK122. CKD also interacts with other risk factors that contribute to higher incidence of TB among migrants and ethnic minorities. DM patients are 4–5 times more likely to have CKD123, CKD patients are more likely to have low vitamin D levels124, and CKD is a complication associated with HIV125. Furthermore, CKD disproportionately affects economically disadvantaged groups, possibly due to the direct impact of poverty or malnutrition, or indirect effects of poverty-associated co-morbidities including DM and HIV126. Given the complex interactions between multiple risk factors, it is difficult to establish the direction of causality.\n\n\nSocio-economic status\n\nThe association between deprivation and TB has long been recognised, leading it to be dubbed a “social disease”127 and “poverty’s penalty”128. There is a strong socio-economic gradient in TB burden between and within countries and communities, with economically disadvantaged groups having the highest risk129. 
The importance of social factors in TB risk is supported by McKeown’s observation that a considerable proportion of the decline in TB-associated mortality occurred before the advent of antibiotics and the BCG vaccine, implicating improved living standards and nutrition as the main drivers130. Szreter contests the McKeown thesis, emphasising the key role of public health measures in regulating the urban environment131. Either way, it is clear that TB disproportionately affects the socially and economically marginalised, with a recognised role for poverty, homelessness, and overcrowding in both the spread of infection and the number of active cases132. In the UK in 2009, the TB rate among the homeless was 20 times that of the general population, at 300 cases per 100,000133. In a study of London districts, the TB notification rate increased by 12% for every 1% rise in the number of people living in overcrowded conditions134.

The wider social determinants of health are entwined with ethnicity, meaning that ethnic socio-economic disparities throughout the life course often lead to health inequalities135. There are marked economic inequalities between ethnic groups in the UK, with both Asian and black ethnic groups having lower employment probability than the population average136. Alongside economic issues of unemployment, low income and poor working conditions, migrants and ethnic minorities are also more likely to face problems of homelessness, poor housing, and overcrowding137. Foreign nationals accounted for 13% of the general UK population in 2015138, but 20% of the homeless population139. In 2011, dwellings with a Household Reference Person (HRP) from a minority ethnic group represented 16.1% of all households in England and Wales, but 47.9% of overcrowded households.
Overcrowding was most common among households with a Bangladeshi HRP (30.2% overcrowded), followed by Pakistani (22.3%) and black-African (21.8%) households140.

King rejects what he terms ‘essentialist’ explanations for the higher TB incidence of migrants and ethnic minorities, which attribute the disproportionate burden of TB in certain groups to intrinsic differences, whether biological, genetic, physiological or cultural. Instead he suggests that “Disparities in health that may at first seem to arise from essential racial or ethnic differences are often in fact the result of contingent socioeconomic differences”132. Similarly, Farmer rejects psychological or cultural explanations, emphasising that “tuberculosis is inextricably tied to poverty and inequality”. He criticises studies that neglect to address the political-economic forces that shape TB distribution, and calls for anthropologists to pay more attention to structural violence (the systematic ways in which social structures disadvantage individuals) and social inequality141.

King and Farmer claim that, given that socio-economic status affects TB risk, biological and genetic approaches are largely irrelevant132,141. Similarly, the medical anthropologist Singer criticised the use of adaptation as a conceptual tool on the basis that such explanations ignore how the political economy shapes the environment that humans adapt to. She argues that differential mortality between socio-economic groups is “unnaturally selected” by the conditions created to further the interests of the dominant class142. However, as discussed, there is evidence to suggest that adaptation resulting from host-pathogen co-evolution influences direct and indirect genetic susceptibility to TB infection and progression to active disease. Perhaps as Mason et al.
suggest, a more constructive approach is required, recognising that “the social model is an important complement to the biomedical model”143.

Given the complex association between ethnicity and socio-economic status, it is hard to disentangle the extent to which socio-economic disadvantage influences TB incidence in migrant and ethnic minority populations144. One study in children from Leeds found that overall, ethnicity explained a high proportion of TB incidence independently of deprivation and population density, although for non-South Asian children, the strongest risk factor was deprivation145. Similarly, a study in Liverpool suggested an association between ethnicity and TB incidence that was independent of deprivation level146, and a study investigating TB trends in England in 1999–2003 indicated that affluent ethnic minority groups are still at greater risk144. It has been suggested that the absence of a strong correlation between deprivation and M.tb infection in the South Asian community may be due to the smaller relative differences in deprivation within this group than across the general population147. Indeed, a study in Newham found an association between the proportion of non-white residents and TB diagnosis in each ward, but no association with deprivation, as the borough as a whole was deprived148.

There is significant heterogeneity in the role that social risk factors play in increasing TB risk in different migrant and ethnic groups. Among UK-born cases notified in 2010–2015, 33.0% of those in the black-Caribbean ethnic group had at least one social risk factor (homelessness, imprisonment, drug or alcohol misuse), higher than in any other ethnic group9. Of black-Caribbean cases, 19.2% were drug users and 18.4% had a history of imprisonment. The countries of origin with the highest number of homeless TB cases were Somalia, at 84 cases, and Eritrea, at 71 cases9.
This suggests that socio-economic disadvantage may play a particularly important role in explaining higher TB incidence among the black-African and black-Caribbean ethnic groups.


Experiences of migration

Difficulties faced during and shortly after migration, including poor nutrition, concurrent poor health, socio-economic marginalisation, and the stress of relocation, may compromise immunity and thereby increase the risk of progression to active disease132. In an anthropological study of illegal Chinese immigrants with TB in New York, it was found that migrants often experience shortages of food and water during long migratory journeys. Upon arrival, temporary residence in detention centres or illegal refuges is associated with overcrowding and malnutrition149. Migrants then face additional challenges including loss of a social support network, communication issues, discrimination, and acculturation150. Ho calls for a focus on the macro-level structural forces that shape TB risk on migratory journeys, such as a lack of government regulation and exploitation by human traffickers149.

Psychological effects include higher rates of anxiety among refugees and asylum seekers compared with the general population or other migrant groups151, and poorer mental health in forced compared with voluntary migrants152. Furthermore, Africans in Britain are at a higher risk of mental illness than non-Africans153, and survey data suggest that immigration is a primary cause of mental distress in about 40% of Africans in the UK154. It has been suggested that the psychological stress and depression associated with migration may play a role in increasing risk of progression to active disease, potentially via neuroendocrine pathways or a negative effect on the cell-mediated immune system155.

Importantly, the majority of immigrant cases of active TB disease develop following arrival in the UK, rather than being carried across national borders as active disease10.
This supports King’s assertion that “The higher rate of TB among immigrants owes as much to the hardships they face during and shortly after migration, as it does to their country of origin”132. Further research is required to establish the extent to which stress and adverse migratory journeys affect specific migrant groups. However, experiences of migration are likely to contribute at least in part to the higher rates of active TB among migrants compared with UK-born ethnic minorities and the general population, especially for those who are marginalised or are travelling illegally.


Treatment-seeking

Knowledge about TB among migrants and ethnic minorities is shaped by cultural beliefs, often arising from experiences in the country of origin156. Certain ideas, including misconceptions about TB causation, transmission and risk, can act as barriers to clinical treatment. Gerrish et al. suggest that “TB is not just a medical disease to be treated with antibiotic therapy but an entity with historical and cultural roots”157. Several studies have identified widespread misconceptions about TB causation and transmission among migrant communities, and a limited understanding of LTBI in particular150. The disease has been erroneously attributed variously to climate conditions158, poisoning, pneumonia159, exposure to chemical products160, and witchcraft161. Several members of a focus group of Somali women believed TB to be a punishment for past ill deeds156.

Migrants may feel a false sense of having ‘left behind’ the high risk of TB in their country of origin150, and TB may be considered by migrants to be a different, more severe, disease in their country of origin162. In some cases, TB may be thought of as incurable due to poor health services in low-income countries163.
Moreover, immigrants may favour traditional systems of care and healing over Western medicine upon arrival164, making them more likely to turn to traditional folk healers, self-diagnosis or self-medication before accessing public healthcare facilities137. Cultural beliefs that lead to delays in treatment-seeking and patient non-compliance may increase the risk of TB transmission within such communities.

Conversely, some have suggested that the cultural beliefs held by migrants are not barriers to treatment-seeking, but rather promote such behaviour. The higher prevalence of TB in migrants’ countries of origin could lead to greater awareness; as Bakhshi argues, “people born in developing countries are too familiar with the disease to neglect it”20. Moreover, Ho describes how traditional Chinese medical beliefs are often complementary to clinical TB treatment in New York, such as through the use of traditional Chinese medicine to reduce the side effects of anti-TB drugs149.

TB-related stigmatisation of immigrants has been reported in multiple studies (reviewed in 150). Stigma is defined as “the situation of the individual who is disqualified from full social acceptance”, whereby the stigmatised person is “reduced in our minds from a whole and usual person to a tainted, discounted one”165. Some cultures consider TB to be sinful and dirty156. The feelings of guilt and shame166 and risk of rejection and discrimination167 that may result from stigmatisation affect attitudes towards diagnosis, treatment and prevention, and therefore hinder control of TB and facilitate its transmission within certain migrant and ethnic groups168. Sufferers may hide their illness to avoid stigma and discrimination and to protect personal or family dignity161. One study indicated that stigma prevented some immigrants from sharing information, even TB-related symptoms, with their doctors169.
Furthermore, patients may be less likely to identify contacts due to concerns about social repercussions, meaning that subsequent preventable TB cases may occur157. Feelings of stigma produced by attitudes in the country of origin are likely to be exacerbated by the negative stereotyping of migrant groups as ‘dirty’ or ‘diseased’ due to the association of TB with immigrants, which may lead to xenophobia and discrimination against sufferers163, termed ‘sociomedical racism’ by McBride170.

The Somali community in the UK provides an informative case study of the socio-cultural meaning and perceived consequences of TB. In Somalia, TB is associated with extreme stigma and social isolation. In a focused ethnography of Somali-born UK residents, Gerrish et al. found that interviewees tended to base their attitudes towards TB on those prevalent in Somalia157. The stigma associated with TB led to expectations of social isolation, shame and loss of self-worth, sometimes extending to the whole family. Although most had an understanding that TB is contagious, it was also commonly believed that people remain infectious after treatment, as TB was often thought to be hereditary and therefore impossible to eradicate. This led to fears that friends would not resume normal social interactions after treatment, and that a diagnosis would jeopardise marriage prospects. Therefore, sufferers tended to isolate themselves or conceal their illness. In reality, anticipated consequences tended to be worse than actual experiences of discrimination, but felt stigma was nonetheless a powerful deterrent to disclosing illness, leading to delays in diagnosis and treatment157.


Access to healthcare

Migrants may have difficulties establishing ‘entitlement’ to good healthcare171.
For example, a study in the UK found that only 32.5% of new migrants who were instructed to register with a GP had done so, and the migrant groups with the smallest proportion registered were likely to have the greatest need172. This is consistent with the Inverse Care Law, that those with the greatest need are least able to access healthcare services173. Various studies have found that migrants face barriers in accessing healthcare services for TB diagnosis or treatment. These include lack of awareness of the local health system (including the availability of free services)174, language barriers175, and fears about loss of privacy due to the use of interpreters162. Therefore, even in cases where there are minimal geographic or economic barriers to accessing health facilities, there are often racial, linguistic and cultural barriers to using these facilities effectively and adhering to treatment regimens176.

Studies have found that migrants face various structural barriers to accessing healthcare services, such as transport difficulties associated with poor services in deprived areas156, and rigid opening hours for collecting medication that do not fit with the working hours and lifestyles of patients177. Moreover, economic barriers include not only direct costs associated with illness, such as the costs of repeated journeys to clinics for treatment, but also indirect costs including losing a job or being evicted by a landlord178. Farmer criticises anthropological investigations for conflating structural violence with cultural difference, tending to exaggerate the role of patient agency and minimise the role of poverty and the barriers that it creates to accessing adequate care and completing treatment141.
Nevertheless, whether structural or cultural, barriers to healthcare access among migrant and ethnic groups can lead to delays in diagnosis and treatment, resulting in increased transmission and incidence.\n\nAccess to healthcare services varies across different migrant populations. Although treatment of TB is free for all in the UK, refugees and asylum seekers have poorer access to health services179. Irregular residence status is likely to lead to significant delays in seeking medical assistance, due to uncertainties surrounding entitlement to services and fears of deportation, since TB patients can legally be deported while receiving ongoing treatment180. Furthermore, irregular migrants may face difficulties in completing long-term TB treatment, which involves repeated consultations, if they do not have stable housing or employment and are short-term residents. Irregular migrants are also less likely to be willing to provide details of their migratory route162 or to provide information about contacts161. Contact tracing is further compromised by the high mobility of migrants and the fact that many do not reside at their official address, but with family and friends162.\n\nConversely, there is evidence to suggest that UK-born cases experience longer delays from symptom onset to commencement of treatment than foreign-born cases9. Moreover, TB treatment completion is actually marginally higher in migrants (85%) than in the UK-born (81%)181. However, these observations are problematic in that UK-born TB cases are often drawn from homeless individuals, problem drug users and prisoners, and so are frequently lost to follow-up and poorly adherent182; they are not representative of the UK-born population as a whole. 
Furthermore, migrants are at higher risk of contracting TB due to the various factors discussed above (including genetics, vitamin D deficiency, co-morbidities, and experiences of migration); since these do not apply to the UK-born, socio-economic issues emerge as the key driver of TB incidence among UK-born cases. Indeed, in 2016, nearly three times as many UK-born cases (22%) as foreign-born cases (8%) had at least one social risk factor (drug misuse, alcohol misuse, homelessness, or imprisonment)9, which is incongruent with the higher overall rates of deprivation in foreign-born compared with UK-born populations23.\n\n\nConclusions\n\nIt is a common misconception that migrants have a higher incidence of TB disease compared with the general population simply because they ‘import’ it from abroad. Bakhshi suggests that they “present a tuberculosis picture from the country of origin and not the United Kingdom where the disease eventually manifests”20. Indeed, differential pathogen exposure can explain much of the higher incidence of TB among migrants and ethnic minorities, due to both pre-migration residence in high-incidence countries and the maintenance of transnational links with the country of birth or ethnic origin. However, positing this as the sole driver fails to address the complex interplay of factors driving the vulnerability of particular migrant and ethnic groups to infection and progression to active disease. These include genetic susceptibility, vitamin D deficiency due to climatic and dietary factors, co-morbidities including DM, HIV and CKD, socio-economic deprivation, and factors linked to the experience of migration itself. Furthermore, certain migrant and ethnic groups face barriers to accessing treatment, including cultural differences in treatment-seeking behaviours, stigmatisation of sufferers, and barriers to healthcare access. 
As stated by Offer et al., “TB in ethnic minorities does not occur in isolation but against a backdrop of socioeconomic, political and cultural context that affects their knowledge, attitudes and behaviours”147. The resultant delays in diagnosis and treatment lead to increased transmission and incidence in these communities.\n\nIn this way, factors disadvantage migrants and ethnic minorities at each stage of the disease, relating to risk of pathogen exposure, vulnerability to infection, development of active disease, and access to treatment. Although heterogeneity between and within broad migrant and ethnic groups leads to variation in risk at each of these stages, the net effect is a higher incidence among migrants and ethnic minorities compared with the general UK population. It is important to understand the complex and multifactorial drivers of this disparity in order to implement effective policies for tackling TB in these vulnerable groups. Currently, migrants from countries with a high incidence of TB are screened for active TB before entry to the UK. However, to complement such measures, which address only differential pathogen exposure, more consideration is needed of policies that target the factors making migrants and ethnic minorities more vulnerable to reactivation of LTBI following their arrival in the UK. These might include vitamin D supplementation, measures targeting co-morbidities, and policies that promote socio-economic equity and migrant rights. In order to reduce delays in diagnosis and treatment, and thereby minimise transmission within migrant and ethnic minority communities, increased health education on TB causation, risk and transmission is required, alongside efforts to tackle the stigmatisation of vulnerable groups. 
It is also important to raise awareness of migrants’ entitlement to diagnosis and treatment through the NHS, alongside reducing the cultural and economic barriers to accessing it.\n\n\nData availability\n\nNo data are associated with this article.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nMcShane H and Tanner R are members of the VALIDATE Network.\n\n\nReferences\n\nTuberculosis Fact Sheet 2016. WHO (World Health Organisation). 2016.\n\nCruz-Knight W, Blake-Gumbs L: Tuberculosis: an overview. Prim Care. 2013; 40(3): 743–56.\n\nFogel N: Tuberculosis: a disease without boundaries. Tuberculosis (Edinb). 2015; 95(5): 527–31.\n\nTiemersma EW, van der Werf MJ, Borgdorff MW, et al.: Natural history of tuberculosis: duration and fatality of untreated pulmonary tuberculosis in HIV negative patients: a systematic review. PLoS One. 2011; 6(4): e17601.\n\nGolden MP, Vikram HR: Extrapulmonary tuberculosis: an overview. Am Fam Physician. 2005; 72(9): 1761–8.\n\nWHO: Global Tuberculosis Report 2016. Geneva: World Health Organisation; 2016.\n\nInternational Organisation for Migration: World Migration Report 2015. Geneva: IOM, 2015.\n\nGilbert RL, Antoine D, French CE, et al.: The impact of immigration on tuberculosis rates in the United Kingdom compared with other European countries. Int J Tuberc Lung Dis. 2009; 13(5): 645–51.\n\nPHE: Tuberculosis in England: 2016. London: Public Health England. 2016.\n\nPareek M, Greenaway C, Noori T, et al.: The impact of migration on tuberculosis epidemiology and control in high-income countries: a review. BMC Med. 2016; 14: 48.\n\nArshad S, Bavan L, Gajari K, et al.: Active screening at entry for tuberculosis among new immigrants: a systematic review and meta-analysis. Eur Respir J. 2010; 35(6): 1336–45. 
\n\nKlinkenberg E, Manissero D, Semenza JC, et al.: Migrant tuberculosis screening in the EU/EEA: yield, coverage and limitations. Eur Respir J. 2009; 34(5): 1180–9.\n\nPHE: TB screening for the UK. Public Health England and UK Home Office. 2013.\n\nFok A, Numata Y, Schulzer M, et al.: Risk factors for clustering of tuberculosis cases: a systematic review of population-based molecular epidemiology studies. Int J Tuberc Lung Dis. 2008; 12(5): 480–92.\n\nChoudhury IW, West CR, Ormerod LP: The outcome of a cohort of tuberculin-positive predominantly South Asian new entrants aged 16–34 to the UK: Blackburn 1989–2001. J Public Health (Oxf). 2014; 36(3): 390–5.\n\nLillebaek T, Andersen AB, Dirksen A, et al.: Persistent high incidence of tuberculosis in immigrants in a low-incidence country. Emerg Infect Dis. 2002; 8(7): 679–84.\n\nMacPherson DW, Gushulak BD: Balancing prevention and screening among international migrants with tuberculosis: population mobility as the major epidemiological influence in low-incidence nations. Public Health. 2006; 120(8): 712–23.\n\nMarks GB, Bai J, Stewart GJ, et al.: Effectiveness of postmigration screening in controlling tuberculosis among refugees: a historical cohort study, 1984–1998. Am J Public Health. 2001; 91(11): 1797–9.\n\nDorsett R: Ethnic minorities in the inner city. Findings, 1998; York: Joseph Rowntree Foundation. 1998.\n\nBakhshi S: Tuberculosis in the United Kingdom: A Tale of Two Nations. Leicester: Troubador Publishing Ltd. 2006.\n\nMurray M, Nardell E: Molecular epidemiology of tuberculosis: achievements and challenges to current knowledge. Bull World Health Organ. 2002; 80(6): 477–82. 
\n\nGlynn JR, Bauer J, de Boer AS, et al.: Interpreting DNA fingerprint clusters of Mycobacterium tuberculosis. European Concerted Action on Molecular Epidemiology and Control of Tuberculosis. Int J Tuberc Lung Dis. 1999; 3(12): 1055–60.\n\nWalker TM, Lalor MK, Broda A, et al.: Assessment of Mycobacterium tuberculosis transmission in Oxfordshire, UK, 2007–12, with whole pathogen genome sequences: an observational study. Lancet Respir Med. 2014; 2(4): 285–92.\n\nAldridge RW, Zenner D, White PJ, et al.: Tuberculosis in migrants moving from high-incidence to low-incidence countries: a population-based cohort study of 519 955 migrants screened before entry to England, Wales, and Northern Ireland. Lancet. 2016; 388(10059): 2510–8.\n\nBhopal RS: Migration, Ethnicity, Race, and Health in Multicultural Societies. 2nd ed. Oxford: Oxford University Press, 2014.\n\nFrench CE, Antoine D, Gelb D, et al.: Tuberculosis in non-UK-born persons, England and Wales, 2001–2003. Int J Tuberc Lung Dis. 2007; 11(5): 577–84.\n\nBasch L, Glick-Schiller N, Szanton Blanc C: Nations Unbound: transnational projects, postcolonial predicaments and deterritorialized nation-states. Langhorne: Gordon and Breach. 1994.\n\nÇaglar AS: Constraining metaphors and the transnationalisation of spaces in Berlin. J Ethn Migr Stud. 2001; 27(4): 601–13.\n\nForeign travel associated illness: a focus on those visiting friends and relatives; 2008 report. London: Health Protection Agency, 2008.\n\nCobelens FG, van Deutekom H, Draayer-Jansen IW, et al.: Risk of infection with Mycobacterium tuberculosis in travellers to areas of high tuberculosis endemicity. Lancet. 2000; 356(9228): 461–5. 
\n\nOrmerod LP, Green RM, Gray S: Are there still effects on Indian Subcontinent ethnic tuberculosis of return visits?: a longitudinal study 1978–97. J Infect. 2001; 43(2): 132–4.\n\nTocque K, Bellis MA, Beeching NJ, et al.: A case-control study of lifestyle risk factors associated with tuberculosis in Liverpool, North-West England. Eur Respir J. 2001; 18(6): 959–64.\n\nSingh H, Joshi M, Ormerod LP: A case control study in the Indian subcontinent ethnic population on the effect of return visits and the subsequent development of tuberculosis. J Infect. 2006; 52(6): 440–2.\n\nRodrigues LC, Diwan VK, Wheeler JG: Protective effect of BCG against tuberculous meningitis and miliary tuberculosis: a meta-analysis. Int J Epidemiol. 1993; 22(6): 1154–8.\n\nTrunz BB, Fine P, Dye C: Effect of BCG vaccination on childhood tuberculous meningitis and miliary tuberculosis worldwide: a meta-analysis and assessment of cost-effectiveness. Lancet. 2006; 367(9517): 1173–80.\n\nFine PE: Variation in protection by BCG: implications of and for heterologous immunity. Lancet. 1995; 346(8986): 1339–45.\n\nNarayanan PR: Influence of sex, age & nontuberculous infection at intake on the efficacy of BCG: re-analysis of 15-year data from a double-blind randomized control trial in South India. Indian J Med Res. 2006; 123(2): 119–24.\n\nColditz GA, Brewer TF, Berkey CS, et al.: Efficacy of BCG vaccine in the prevention of tuberculosis. Meta-analysis of the published literature. JAMA. 1994; 271(9): 698–702.\n\nMahomed H, Kibel M, Hawkridge T, et al.: The impact of a change in bacille Calmette-Guérin vaccine policy on tuberculosis incidence in children in Cape Town, South Africa. Pediatr Infect Dis J. 
2006; 25(12): 1167–72.\n\nMoyo S, Verver S, Mahomed H, et al.: Age-related tuberculosis incidence and severity in children under 5 years of age in Cape Town, South Africa. Int J Tuberc Lung Dis. 2010; 14(2): 149–54.\n\nPalmer CE, Long MW: Effects of infection with atypical mycobacteria on BCG vaccination and tuberculosis. Am Rev Respir Dis. 1966; 94(4): 553–68.\n\nWeiszfeiler JG, Karasseva V: Mixed mycobacterial infections. Rev Infect Dis. 1981; 3(5): 1081–3.\n\nRook GA, Bahr GM, Stanford JL: The effect of two distinct forms of cell-mediated response to mycobacteria on the protective efficacy of BCG. Tubercle. 1981; 62(1): 63–8.\n\nStanford JL, Shield MJ, Rook GA: How environmental mycobacteria may predetermine the protective efficacy of BCG. Tubercle. 1981; 62(1): 55–62.\n\nFifteen year follow up of trial of BCG vaccines in south India for tuberculosis prevention. Tuberculosis Research Centre (ICMR), Chennai. Indian J Med Res. 1999; 110: 56–69.\n\nBlack GF, Fine PEM, Warndorff DK, et al.: Relationship between IFN-gamma and skin test responsiveness to Mycobacterium tuberculosis PPD in healthy, non-BCG-vaccinated young adults in Northern Malawi. Int J Tuberc Lung Dis. 2001; 5(7): 664–72.\n\nBrandt L, Feino Cunha J, Weinreich Olsen A, et al.: Failure of the Mycobacterium bovis BCG vaccine: some species of environmental mycobacteria block multiplication of BCG and induction of protective immunity to tuberculosis. Infect Immun. 2002; 70(2): 672–8.\n\nHirsch A: Handbook of geographical and historical pathology. Vol. III. Diseases of organs and parts. London: New Syndenham Society, 1886. 
\n\nAbel L, Fellay J, Haas DW, et al.: Genetics of human susceptibility to active and latent tuberculosis: present knowledge and future perspectives. Lancet Infect Dis. 2018; 18(3): e64–e75.\n\nKallman FJRD: Twin studies on the significance of genetic factors in tuberculosis. Am Rev Tuberc. 1942; 47: 549–74.\n\nComstock GW: Tuberculosis in twins: a re-analysis of the Prophit survey. Am Rev Respir Dis. 1978; 117(4): 621–4.\n\nThye T, Browne EN, Chinbuah MA, et al.: IL10 haplotype associated with tuberculin skin test response but not with pulmonary TB. PLoS One. 2009; 4(5): e5420.\n\nZembrzuski VM, Basta PC, Callegari-Jacques SM, et al.: Cytokine genes are associated with tuberculin skin test response in a native Brazilian population. Tuberculosis (Edinb). 2010; 90(1): 44–9.\n\nSveinbjornsson G, Gudbjartsson DF, Halldorsson BV, et al.: HLA class II sequence variants influence tuberculosis risk in populations of European ancestry. Nat Genet. 2016; 48(3): 318–22.\n\nStein CM, Zalwango S, Malone LL, et al.: Genome scan of M. tuberculosis infection and disease in Ugandans. PLoS One. 2008; 3(12): e4094.\n\nCobat A, Gallant CJ, Simkin L, et al.: Two loci control tuberculin skin test reactivity in an area hyperendemic for tuberculosis. J Exp Med. 2009; 206(12): 2583–91.\n\nJabot-Hanin F, Cobat A, Feinberg J, et al.: Major Loci on Chromosomes 8q and 3q Control Interferon γ Production Triggered by Bacillus Calmette-Guerin and 6-kDa Early Secretory Antigen Target, Respectively, in Various Populations. J Infect Dis. 2016; 213(7): 1173–9. 
\n\nDaya M, van der Merwe L, van Helden PD, et al.: The role of ancestry in TB susceptibility of an admixed South African population. Tuberculosis (Edinb). 2014; 94(4): 413–20.\n\nDelgado JC, Baena A, Thim S, et al.: Ethnic-specific genetic associations with pulmonary tuberculosis. J Infect Dis. 2002; 186(10): 1463–8.\n\nStead WW: Variation in vulnerability to tuberculosis in America today: random, or legacies of different ancestral epidemics? Int J Tuberc Lung Dis. 2001; 5(9): 807–14.\n\nCoussens AK, Wilkinson RJ, Nikolayevskyy V, et al.: Ethnic variation in inflammatory profile in tuberculosis. PLoS Pathog. 2013; 9(7): e1003468.\n\nGagneux S: Host-pathogen coevolution in human tuberculosis. Philos Trans R Soc Lond B Biol Sci. 2012; 367(1590): 850–9.\n\nSmith NH, Hewinson RG, Kremer K, et al.: Myths and misconceptions: the origin and evolution of Mycobacterium tuberculosis. Nat Rev Microbiol. 2009; 7(7): 537–44.\n\nLux M: Perfect subjects: race, tuberculosis, and the Qu'Appelle BCG Vaccine Trial. Can Bull Med Hist. 1998; 15(2): 277–95.\n\nMacDonald N, Hébert PC, Stanbrook MB: Tuberculosis in Nunavut: a century of failure. CMAJ. 2011; 183(7): 741–3.\n\nBarnes I, Duda A, Pybus OG, et al.: Ancient urbanization predicts genetic resistance to tuberculosis. Evolution. 2011; 65(3): 842–8.\n\nMartineau AR: Old wine in new bottles: vitamin D in the treatment and prevention of tuberculosis. Proc Nutr Soc. 2012; 71(1): 84–9.\n\nTalat N, Perry S, Parsonnet J, et al.: Vitamin D deficiency and tuberculosis progression. Emerg Infect Dis. 
2010; 16(5): 853–5.\n\nHuang SJ, Wang XH, Liu ZD, et al.: Vitamin D deficiency and the risk of tuberculosis: a meta-analysis. Drug Des Devel Ther. 2016; 11: 91–102.\n\nGibney KB, MacGregor L, Leder K, et al.: Vitamin D Deficiency Is Associated with Tuberculosis and Latent Tuberculosis Infection in Immigrants from Sub-Saharan Africa. Clin Infect Dis. 2008; 46(3): 443–6.\n\nHo-Pham LT, Nguyen ND, Nguyen TT, et al.: Association between vitamin D insufficiency and tuberculosis in a Vietnamese population. BMC Infect Dis. 2010; 10(1): 306.\n\nNnoaham KE, Clarke A: Low serum vitamin D levels and tuberculosis: a systematic review and meta-analysis. Int J Epidemiol. 2008; 37(1): 113–9.\n\nChocano-Bedoya P, Ronnenberg AG: Vitamin D and tuberculosis. Nutr Rev. 2009; 67(5): 289–93.\n\nSita-Lumsden A, Lapthorn G, Swaminathan R, et al.: Reactivation of tuberculosis and vitamin D deficiency: the contribution of diet and exposure to sunlight. Thorax. 2007; 62(11): 1003–7.\n\nThieden E, Philipsen PA, Heydenreich J, et al.: Vitamin D level in summer and winter related to measured UVR exposure and behavior. Photochem Photobiol. 2009; 85(6): 1480–4.\n\nHolick MF: Vitamin D deficiency. N Engl J Med. 2007; 357(3): 266–81.\n\nEggemoen AR, Knutsen KV, Dalen I, et al.: Vitamin D status in recently arrived immigrants from Africa and Asia: a cross-sectional study from Norway of children, adolescents and adults. BMJ Open. 2013; 3(10): e003293.\n\nPrimary vitamin D deficiency in adults. Drug Ther Bull. 2006; 44(4): 25–9. 
\n\nMartin CA, Gowda U, Renzaho AM: The prevalence of vitamin D deficiency among dark-skinned populations according to their stage of migration and region of birth: A meta-analysis. Nutrition. 2016; 32(1): 21–32.\n\nLawson M, Thomas M, Hardiman A: Dietary and lifestyle factors affecting plasma vitamin D levels in Asian children living in England. Eur J Clin Nutr. 1999; 53(4): 268–72.\n\nZhang R, Naughton DP: Vitamin D in health and disease: current perspectives. Nutr J. 2010; 9: 65.\n\nChan TY: Vitamin D deficiency and susceptibility to tuberculosis. Calcif Tissue Int. 2000; 66(6): 476–8.\n\nFinch PJ, Millard FJ, Maxwell JD: Risk of tuberculosis in immigrant Asians: culturally acquired immunodeficiency? Thorax. 1991; 46(1): 1–5.\n\nBonilla C, Ness AR, Wills AK, et al.: Skin pigmentation, sun exposure and vitamin D levels in children of the Avon Longitudinal Study of Parents and Children. BMC Public Health. 2014; 14: 597.\n\nHunt SP, O'Riordan JL, Windo J, et al.: Vitamin D status in different subgroups of British Asians. Br Med J. 1976; 2(6048): 1351–4.\n\nWilkinson RJ, Llewelyn M, Toossi Z, et al.: Influence of vitamin D deficiency and vitamin D receptor polymorphisms on tuberculosis among Gujarati Asians in west London: a case-control study. Lancet. 2000; 355(9204): 618–21.\n\nFarrow S: Allelic variation and the vitamin D receptor. Lancet. 1994; 343(8908): 1242.\n\nChen C, Liu Q, Zhu L, et al.: Vitamin D receptor gene polymorphisms on the risk of tuberculosis, a meta-analysis of 29 case-control studies. PLoS One. 2013; 8(12): e83843. 
\n\nSutaria N, Liu CT, Chen TC: Vitamin D Status, Receptor Gene Polymorphisms, and Supplementation on Tuberculosis: A Systematic Review of Case-Control Studies and Randomized Controlled Trials. J Clin Transl Endocrinol. 2014; 1(4): 151–60.\n\nZmuda JM, Cauley JA, Ferrell RE: Molecular epidemiology of vitamin D receptor gene variants. Epidemiol Rev. 2000; 22(2): 203–17.\n\nAndraos C, Koorsen G, Knight JC, et al.: Vitamin D receptor gene methylation is associated with ethnicity, tuberculosis, and TaqI polymorphism. Hum Immunol. 2011; 72(3): 262–8.\n\nStevenson CR, Forouhi NG, Roglic G, et al.: Diabetes and tuberculosis: the impact of the diabetes epidemic on tuberculosis incidence. BMC Public Health. 2007; 7: 234.\n\nRoot HF: The association of diabetes and tuberculosis. N Engl J Med. 1934; 210(8): 127–47.\n\nJeon CY, Murray MB: Diabetes mellitus increases the risk of active tuberculosis: a systematic review of 13 observational studies. PLoS Med. 2008; 5(7): e152.\n\nMartens GW, Arikan MC, Lee J, et al.: Tuberculosis susceptibility of diabetic mice. Am J Respir Cell Mol Biol. 2007; 37(5): 518–24.\n\nOgbera AO, Kapur A, Abdur-Razzaq H, et al.: Clinical profile of diabetes mellitus in tuberculosis. BMJ Open Diabetes Res Care. 2015; 3(1): e000112.\n\nStevenson CR, Critchley JA, Forouhi NG, et al.: Diabetes and the risk of tuberculosis: a neglected threat to public health? Chronic Illn. 2007; 3(3): 228–45. 
\n\nThomas C, Nightingale CM, Donin AS, et al.: Socio-economic position and type 2 diabetes risk factors: patterns in UK children of South Asian, black African-Caribbean and white European origin. PLoS One. 2012; 7(3): e32619.\n\nSproston K, Mindell J: Health Survey for England 2004: Volume 1: The health of ethnic minority groups. Leeds: The Information Centre. 2006; 127–47.\n\nHolman N, Forouhi NG, Goyder E, et al.: The Association of Public Health Observatories (APHO) Diabetes Prevalence Model: estimates of total diabetes prevalence for England, 2010–2030. Diabet Med. 2011; 28(5): 575–82.\n\nBhopal R, Hayes L, White M, et al.: Ethnic and socio-economic inequalities in coronary heart disease, diabetes and risk factors in Europeans and South Asians. J Public Health Med. 2002; 24(2): 95–105.\n\nBhopal RS: A four-stage model explaining the higher risk of Type 2 diabetes mellitus in South Asians compared with European populations. Diabet Med. 2013; 30(1): 35–42.\n\nZumla A, Malon P, Henderson J, et al.: Impact of HIV infection on tuberculosis. Postgrad Med J. 2000; 76(895): 259–68.\n\nChretien J: Tuberculosis and HIV. The cursed duet. Bull Int Union Tuberc Lung Dis. 1990; 65(1): 25–8.\n\nGetahun H, Gunneberg C, Granich R, et al.: HIV infection-associated tuberculosis: the epidemiology and the response. Clin Infect Dis. 2010; 50 Suppl 3: S201–7.\n\nSonnenberg P, Glynn JR, Fielding K, et al.: How soon after infection with HIV does the risk of tuberculosis start to increase? A retrospective cohort study in South African gold miners. J Infect Dis. 2005; 191(2): 150–8. 
\n\nCorbett EL, Watt CJ, Walker N, et al.: The growing burden of tuberculosis: global trends and interactions with the HIV epidemic. Arch Intern Med. 2003; 163(9): 1009–21.\n\nMcShane H: Co-infection with HIV and TB: double trouble. Int J STD AIDS. 2005; 16(2): 95–100; quiz 101.\n\nZhang M, Gong J, Iyer DV, et al.: T cell cytokine responses in persons with tuberculosis and human immunodeficiency virus infection. J Clin Invest. 1994; 94(6): 2435–42.\n\nBell LCK, Noursadeghi M: Pathogenesis of HIV-1 and Mycobacterium tuberculosis co-infection. Nat Rev Microbiol. 2018; 16(2): 80–90.\n\nTavares AM, Fronteira I, Couto I, et al.: HIV and tuberculosis co-infection among migrants in Europe: A systematic review on the prevalence, incidence and mortality. PLoS One. 2017; 12(9): e0185526.\n\nWHO: Global Health Observatory Data Repository. World Health Organisation. 2016.\n\nPradhan RP, Katz LA, Nidus BD, et al.: Tuberculosis in dialyzed patients. JAMA. 1974; 229(7): 798–800.\n\nHu HY, Wu CY, Huang N, et al.: Increased risk of tuberculosis in patients with end-stage renal disease: a population-based cohort study in Taiwan, a country of high incidence of end-stage renal disease. Epidemiol Infect. 2014; 142(1): 191–9.\n\nDobler CC, McDonald SP, Marks GB: Risk of Tuberculosis in Dialysis Patients: A Nationwide Cohort Study. PLoS One. 2011; 6(12): e29563.\n\nAl-Efraij K, Mota L, Lunny C, et al.: Risk of active tuberculosis in chronic kidney disease: a systematic review and meta-analysis. Int J Tuberc Lung Dis. 2015; 19(12): 1493–9. 
\n\nRomanowski K, Clark EG, Levin A, et al.: Tuberculosis and chronic kidney disease: an emerging global syndemic. Kidney Int. 2016; 90(1): 34–40.\n\nKato S, Chmielewski M, Honda H, et al.: Aspects of immune dysfunction in end-stage renal disease. Clin J Am Soc Nephrol. 2008; 3(5): 1526–33.\n\nNational Collaborating Centre for Chronic Conditions (UK), Centre for Clinical Practice at NICE (UK): Tuberculosis: Clinical Diagnosis and Management of Tuberculosis, and Measures for Its Prevention and Control. London: National Institute for Health and Clinical Excellence (UK), Royal College of Physicians of London. 2011.\n\nLightstone L: Preventing renal disease: the ethnic challenge in the United Kingdom. Kidney Int Suppl. 2003; 63(83): S135–8.\n\nBall S, Lloyd J, Cairns T, et al.: Why is there so much end-stage renal failure of undetermined cause in UK Indo-Asians? QJM. 2001; 94(4): 187–93.\n\nOstermann M, Palchaudhuri P, Riding A, et al.: Incidence of tuberculosis is high in chronic kidney disease patients in South East England and drug resistance common. Ren Fail. 2016; 38(2): 256–61.\n\nNew JP, Middleton RJ, Klebe B, et al.: Assessing the prevalence, monitoring and management of chronic kidney disease in patients with diabetes compared with those without diabetes in general practice. Diabet Med. 2007; 24(4): 364–9.\n\nKhan S: Vitamin D deficiency and secondary hyperparathyroidism among patients with chronic kidney disease. Am J Med Sci. 2007; 333(4): 201–7. 
\n\nWinston JA: HIV and CKD epidemiology. Adv Chronic Kidney Dis. 2010; 17(1): 19–25.\n\nHossain MP, Goyder EC, Rigby JE, et al.: CKD and poverty: a growing global challenge. Am J Kidney Dis. 2009; 53(1): 166–74.\n\nDubos R, Dubos J: The White Plague: Tuberculosis, Man, and Society. New Brunswick, N.J.: Rutgers University Press, 1996.\n\nWeiss KB, Addington WW: Tuberculosis: poverty's penalty. Am J Respir Crit Care Med. 1998; 157(4 Pt 1): 1011.\n\nLönnroth K, Jaramillo E, Williams BG, et al.: Drivers of tuberculosis epidemics: the role of risk factors and social determinants. Soc Sci Med. 2009; 68(12): 2240–6.\n\nMcKeown T: The Modern Rise of Population. London: Edward Arnold. 1976.\n\nSzreter S: The Importance of Social Intervention in Britain's Mortality Decline c.1850–1914: a Re-interpretation of the Role of Public Health. Soc Hist Med. 1988; 1(1): 1–38.\n\nKing NB: Immigration, Race and Geographies of Difference in the Tuberculosis Pandemic. In: Gandy M, Zumla A, eds., The Return of the White Plague: Global Poverty and the ‘New’ Tuberculosis. London: Verso, 2003.\n\nBurki T: Tackling tuberculosis in London's homeless population. Lancet. 2010; 376(9758): 2055–6.\n\nMangtani P, Jolley DJ, Watson JM, et al.: Socioeconomic deprivation and notification rates for tuberculosis in London during 1982–91. BMJ. 1995; 310(6985): 963–6.\n\nKrieger N: Discrimination and Health. In: Berkman LF, Kawachi I, eds., Social Epidemiology. New York: Oxford University Press. 2000.\n\n2011 Census analysis: Ethnicity and the Labour Market, England and Wales. Office for National Statistics. 2014. 
Reference Source\n\nTala E: Migration, ethnic minorities and tuberculosis. Eur Respir J. 1989; 2(6): 492–3. PubMed Abstract\n\nPopulation of the UK by Country of Birth and Nationality: 2015. Office for National Statistics. 2016. Reference Source\n\nStatutory Homelessness: October to December Quarter 2015. Department for Communities and Local Government. 2016. Reference Source\n\nOvercrowding and Under-Occupation by Ethnic Group, 2011. Office for National Statistics. 2014. Reference Source\n\nFarmer P: Infections and Inequalities: The Modern Plagues. California: University of California Press, 1999. Reference Source\n\nSinger M: Farewell to adaptationism: unnatural selection and the politics of biology. Med Anthropol Q. 1996; 10(4): 496–515. PubMed Abstract | Publisher Full Text\n\nMason PH, Roy A, Spillane J, et al.: Social, Historical and Cultural Dimensions of Tuberculosis. J Biosoc Sci. 2016; 48(2): 206–32. PubMed Abstract | Publisher Full Text\n\nCrofts JP, Gelb D, Andrews N, et al.: Investigating tuberculosis trends in England. Public Health. 2008; 122(12): 1302–10. PubMed Abstract | Publisher Full Text\n\nParslow R, El-Shimy NA, Cundall DB, et al.: Tuberculosis, deprivation, and ethnicity in Leeds, UK: 1982–1997. Arch Dis Child. 2001; 84(2): 109–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTocque K, Regan M, Remmington T, et al.: Social factors associated with increases in tuberculosis notifications. Eur Respir J. 1999; 13(3): 541–5. PubMed Abstract | Publisher Full Text\n\nOffer C, Lee A, Humphreys C: Tuberculosis in South Asian communities in the UK: a systematic review of the literature. J Public Health (Oxf). 2016; 38(2): 250–7. PubMed Abstract | Publisher Full Text\n\nBeckhurst C, Evans S, MacFarlane AF, et al.: Factors influencing the distribution of tuberculosis cases in an inner London borough. Commun Dis Public Health. 2000; 3(1): 28–31. PubMed Abstract\n\nHo MJ: Migratory journeys and tuberculosis risk. Med Anthropol Q. 
2003; 17(4): 442–58. PubMed Abstract | Publisher Full Text\n\nAbarca Tomas B, Pell C, Bueno Cavanillas A, et al.: Tuberculosis in migrant populations. A systematic review of the qualitative literature. PLoS One. 2013; 8(12): e82440. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRaphaely N: Understanding the health needs of migrants in the South East Region. London: Health Protection Agency and Department of Health, 2010. Reference Source\n\nSamers M: Migration. Oxford: Routledge, 2010. Reference Source\n\nBrugha T, Jenkins R, Bebbington P, et al.: Risk factors and the prevalence of neurosis and psychosis in ethnic groups in Great Britain. Soc Psychiatry Psychiatr Epidemiol. 2004; 39(12): 939–46. PubMed Abstract | Publisher Full Text\n\nThe Mental and Emotional Wellbeing of Africans in the UK: A research and discussion paper. African Health Policy Network, 2013. Reference Source\n\nPrince M, Patel V, Saxena S, et al.: No health without mental health. Lancet. 2007; 370(9590): 859–77. PubMed Abstract | Publisher Full Text\n\nWieland ML, Weis JA, Yawn BP, et al.: Perceptions of tuberculosis among immigrants and refugees at an adult education center: a community-based participatory research approach. J Immigr Minor Health. 2012; 14(1): 14–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGerrish K, Naisby A, Ismail M: The meaning and consequences of tuberculosis among Somali people in the United Kingdom. J Adv Nurs. 2012; 68(12): 2654–63. PubMed Abstract | Publisher Full Text\n\nJohnson A: Beliefs and barriers related to understanding TB amongst vulnerable groups in South East London. 2006. Reference Source\n\nNnoaham KE, Pool R, Bothamley G, et al.: Perceptions and experiences of tuberculosis among African patients attending a tuberculosis clinic in London. Int J Tuberc Lung Dis. 2006; 10(9): 1013–7. PubMed Abstract\n\nPoss JE: The meanings of tuberculosis for Mexican migrant farmworkers in the United States. Soc Sci Med. 1998; 47(2): 195–202. 
PubMed Abstract | Publisher Full Text\n\nCoreil J, Lauzardo M, Heurtelou M: Cultural feasibility assessment of tuberculosis prevention among persons of Haitian origin in South Florida. J Immigr Health. 2004; 6(2): 63–9. PubMed Abstract | Publisher Full Text\n\nKulane A, Ahlberg BM, Berggren I: \"It is more than the issue of taking tablets\": the interplay between migration policies and TB control in Sweden. Health Policy. 2010; 97(1): 26–31. PubMed Abstract | Publisher Full Text\n\nFestenstein FaG JM: Tuberculosis in Ethnic Minority Populations in Industrialised Countries. In: JDH Porter and JM Grange, eds., Tuberculosis: An Interdisciplinary Perspective. London: Imperial College Press, 2010.\n\nKraut AM: Silent Travellers: Germs, Genes and the 'Immigrant Menace'. Baltimore: John Hopkins University Press, 1994. Reference Source\n\nGoffman E: Stigma: Notes on the Management of Spoiled Identity. London: Penguin Books. 1990. Reference Source\n\nKelly P: Isolation and Stigma: The Experience of Patients With Active Tuberculosis. J Community Health Nurs. 1999; 16(4): 233–41. PubMed Abstract | Publisher Full Text\n\nBaral SC, Karki DK, Newell JN: Causes of stigma and discrimination associated with tuberculosis in Nepal: a qualitative study. BMC Public Health. 2007; 7: 211. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCourtwright A, Turner AN: Tuberculosis and stigmatization: pathways and interventions. Public Health Rep. 2010; 125(Suppl 4): 34–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSterne JA, Rodrigues LC, Guedes IN: Does the efficacy of BCG decline with time since vaccination? Int J Tuberc Lung Dis. 1998; 2(3): 200–7. PubMed Abstract\n\nMcBride D: From Tuberculosis to AIDS: Epidemics Among Urban Blacks Since 1900. Albany: State University of New York. 1991. Reference Source\n\nBollini P, Siem H: No real progress towards equity: health of migrants and ethnic minorities on the eve of the year 2000. Soc Sci Med. 1995; 41(6): 819–28. 
PubMed Abstract | Publisher Full Text\n\nStagg HR, Jones J, Bickler G, et al.: Poor uptake of primary healthcare registration among recent entrants to the UK: a retrospective cohort study. BMJ Open. 2012; 2(4): pii: e001453. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHart JT: The Inverse Care Law. Lancet. 1971; 1(7696): 405–12. PubMed Abstract | Publisher Full Text\n\nBender A, Andrews G, Peter E: Displacement and tuberculosis: recognition in nursing care. Health Place. 2010; 16(6): 1069–76. PubMed Abstract | Publisher Full Text\n\nIto KL: Health culture and the clinical encounter: Vietnamese refugees' responses to preventive drug treatment of inactive tuberculosis. Med Anthropol Q. 1999; 13(3): 338–64. PubMed Abstract | Publisher Full Text\n\nSumartojo E: When tuberculosis treatment fails. A social behavioral account of patient adherence. Am Rev Respir Dis. 1993; 147(5): 1311–20. PubMed Abstract | Publisher Full Text\n\nJoseph HA, Waldman K, Rawls C, et al.: TB perspectives among a sample of Mexicans in the United States: results from an ethnographic study. J Immigr Minor Health. 2008; 10(2): 177–85. PubMed Abstract | Publisher Full Text\n\nHo MJ: Sociocultural aspects of tuberculosis: a literature review and a case study of immigrant tuberculosis. Soc Sci Med. 2004; 59(4): 753–62. PubMed Abstract | Publisher Full Text\n\nTaylor K: Asylum seekers, refugees, and the politics of access to health care: a UK perspective. Br J Gen Pract. 2009; 59(567): 765–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReeves M, de Wildt G, Murshali H, et al.: Access to health care for people seeking asylum in the UK. Br J Gen Pract. 2006; 56(525): 306–8. PubMed Abstract | Free Full Text\n\nWagner KS, Lawrence J, Anderson L, et al.: Migrant health and infectious diseases in the UK: findings from the last 10 years of surveillance. J Public Health (Oxf). 2014; 36(1): 28–35. 
PubMed Abstract | Publisher Full Text\n\nStory A, Murad S, Roberts W, et al.: Tuberculosis in London: the importance of homelessness, problem drug use and prison. Thorax. 2007; 62(8): 667–71. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "33177",
"date": "27 Apr 2018",
"name": "Jessica L. Potter",
"expertise": [
"Reviewer Expertise Healthcare access",
"tuberculosis",
"migration",
"qualitative research",
"social science"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an important review and good addition to the literature in that it combines both the biomedical and the social determinants of TB risk amongst populations that experience a disproportionate burden of disease in the UK. Great concluding section in particular. \"Given the complex association between ethnicity and socio-economic status, it is hard to disentangle the extent to which socio-economic disadvantage influences TB incidence in migrant and ethnic minority populations\" - I suggest this needs to be more explicit from the start and particularly as you risk, perhaps (my reading of it) conflating the migrant and ethnic-minority experience thus contributing to an idea that non-white = non British.\nI note the reference to Nancy Krieger's work on the embodiment of risk across a life course. Considering TB through this ecosocial perspective allows an analysis of the interplay between the biomedical and the social. I wonder whether the article would flow better and provide a more critical reading if situated within this framework and, in particular, framing the literature from a 'distal' or macro (social determinants) perspective down to more proximal factors might provide a better reading. 
In terms of considerations of factors that contribute to risk I think that placing discussions of genetic susceptibility at the start somewhat undermines the well-founded concerns highlighted in the social determinants section - that our understanding of TB and research investment heavily focuses on the biomedical rather than the social.\nThe section about genetic susceptibility is perhaps a little uncritical and positions the association between ethnicity and TB risk at a genetic level as taken for granted. For a critical reading of Cummins' research in relation to race and racism, for example, chapters 1 and 2 (the rise and fall of race) in Discovering TB by Christian McMillen.\n\nI also note you use the term 'colored' to describe participants - I realise this is in the title of that study. As this term has racist origins I would suggest either adding [sic!] after it or altering the term to more accurately describe the population the research was conducted amongst (the former is probably easiest!).\n\nA few other points: - 'Hindu-Asians are more frequently vegetarian' - than who? - 'Globalisation and capital flows' are drivers of migration - the reference is a report on migration and cities...is there a better reference? - 2015 PHE data provided - can this be updated?
- The section on health-seeking states \"The disease has been variously erroneously attributed to climate conditions....\" - Please can you delete 'erroneously': You previously discuss vitamin D and migration - both related to 'climate conditions' as possible risk factors for TB (I realise this is not necessarily what the participants were getting at in their interviews but still think best to remove it) My other reason for removing it is that it undermines what is 'truth' as experienced by patients.\n\n- 'Non-compliance' - I tend to use adherence as per NICE guidelines https://www.nice.org.uk/guidance/cg76 - \"Farmer criticises anthropological investigations for conflating structural violence with cultural difference\" - The report 'the snowy white peaks of the NHS' and the McPherson report in the wake of the Stephen Lawrence inquiry talked about institutional racism within the NHS and I wonder whether this would be useful to mention as you talk about cultural barriers in relation to language, knowledge of health systems etc but, although you mention 'racial' barriers it is within the context of talking about migrants, rather than British-born BAME communities whose challenges in accessing care you don't specifically address.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
},
{
"id": "34098",
"date": "09 Jul 2018",
"name": "Manish Pareek",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThank you for asking me to review this manuscript which reviews the factors influencing the higher incidence of TB in migrants and ethnic minorities.\nIntroduction\nThe authors should clarify that the symptoms of active disease both site-specific and systemic and list site-specific symptoms.\n\nEpidemiology\nAlthough a mainly UK based paper, is there evidence from other high-income countries relating to transmission in migrant communities?\n\nDifferential exposure\nWhilst I agree travel is potentially important as a risk factor for acquiring TB infection, the evidence at present is weak. I think the authors need to nuance this section and clarify that the data is weak and further work is required. In particular type of travel is important. Migrants working in healthcare overseas as volunteers etc will be at higher risk.\n\nVitamin D\nThe impact of vitamin D is difficult to tease out and I think the authors should cite the work of Pareek and colleagues which suggests that vitamin D levels (i.e. deficiency) are associated with EPTB.\n\nImpact of strain\nI would like more discussion on the host-pathogen interaction and impact of strain type on TB phenotype. See Pareek et al's work in this area.\n\nDiabetes\nThe relative risk of TB in DM patients seems lower in the UK. 
Have they seen the work from David Moore's group on the CPRD cohort?\n\nHIV\nWhilst I agree that the absolute proportion of TB-HIV cases is low, it should be borne in mind that incidence in this group is relatively high and they are an important group in which to prevent TB infection and progression.\n\nFirst generation vs. second generation\nI would have liked some discussion in the manuscript relating to why second generation individuals (i.e. UK born but with parents born overseas) are at increased risk of TB. This is often neglected but needs to be evaluated.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-461
|
https://f1000research.com/articles/7-1312/v1
|
17 Aug 18
|
{
"type": "Method Article",
"title": "Mathematical model and analysis of hepatitis B virus transmission dynamics",
"authors": [
"Blessing O. Emerenini",
"Simeon C. Inyama",
"Simeon C. Inyama"
],
"abstract": "Hepatitis B is a liver infection induced by the hepatitis B virus (HBV). In this paper, the dynamics involved in the transmission of HBV is mathematically formulated with considerations of different populations of individuals. The role of HBV vaccination of new born babies and the treatment of infected individuals in controlling the transmission are factored into the model. The model in this study is based on the standard SEIR model.",
"keywords": [
"Hepatitis B virus (HBV)",
"Disease-free equilibrium",
"Endermic equilibrium state",
"Stability analysis"
],
"content": "Introduction\n\nHepatitis B (HB) is a potentially life-threatening liver infection caused by the hepatitis B virus (HBV), which is a DNA virus classified in the virus family of Hepadnaviridae. The World Health Organization (WHO) in 1 reported that more than 0.25 billion people are living with HBV infection, most of which resulted in several deaths.\n\nA vaccine against HB has been available since 1982, nevertheless there is still an increase in its transmission and spread. Key facts from 1 reveal that HBV can survive outside the human body for at least 7 days, and during this period HBV can still cause infection if it enters any unimmunized human body. Most HB carriers are asymptomatic during the acute infection phase, nonetheless some people experience acute illness that can last for several days with variations in the progression.\n\nThe use of mathematical models in scientific research has improved our understanding of contributing factors. Mathematical model of HBV has ranged from simple models2,3 to more complex models involving the contributions of controls (e.g. vaccines)4, and analysis of the impact of immigrant5.\n\nMotivated by other HB studies, we use an infectious disease model to understand the impact of HB vaccination and treatment on the dynamics of HBV transmission and prevalence using an SEIR format.\n\n\nMethods\n\nA variety of mathematical models exist, such as the SIR, SIS, SIRS, and their variations; where S=Susceptible class, I=Infective class, and R=Recovered class. The model used in this study takes the form of SEIR based on ordinary differential equations, which shall be solved to obtain the disease-free equilibrium (DFE) state.\n\nTable 1 lists the parameters used. 
Figure 1 shows a schematic presentation of the model.\n\nWe formulate the HB transmission model as follows:\n\ndM(t)/dt = cP − ϕM(t) − βM(t) (1)\n\ndS(t)/dt = (1 − c)P + ϕM(t) + πR(t) − (kI(t) + β)S(t) (2)\n\ndL(t)/dt = kS(t)I(t) − qL(t) − μL(t) − βL(t) (3)\n\ndI(t)/dt = μL(t) − ψI(t) − ηI(t) − βI(t) (4)\n\ndR(t)/dt = qL(t) + ψI(t) − πR(t) − βR(t) (5)\n\nN(t) = M(t) + S(t) + L(t) + I(t) + R(t) (6)\n\nwhere M = immunized individuals, S = susceptible, L = latently infected/exposed, I = infectious individuals, and R = recovered.\n\nLet E(M, S, L, I, R) be the equilibrium point of the system described by (1)–(6). At the equilibrium state we have\n\ndM/dt = dS/dt = dL/dt = dI/dt = dR/dt = 0,\n\ni.e.,\n\ncP − ϕM − βM = 0 (7)\n\ncP − (ϕ + β)M = 0 (8)\n\n(1 − c)P + ϕM + πR − kSI − βS = 0 (9)\n\n(1 − c)P + ϕM + πR − (kI + β)S = 0 (10)\n\nkSI − qL − μL − βL = 0 (11)\n\nkSI − (q + μ + β)L = 0 (12)\n\nμL − ψI − ηI − βI = 0 (13)\n\nμL − (ψ + η + β)I = 0 (14)\n\nqL + ψI − πR − βR = 0 (15)\n\nqL + ψI − (π + β)R = 0 (16)\n\nIn order to obtain the DFE state we solve equations (7)–(16) simultaneously.\n\nLet Eo (Mo, So, Lo, Io, Ro) be the trivial equilibrium state (TES) of (1)–(6). There exists no TES, since the population cannot go extinct so long as new babies are born into it (i.e. cP ≠ 0 and (1 − c)P ≠ 0).\n\nThat is, Eo (Mo, So, Lo, Io, Ro) ≠ (0, 0, 0, 0, 0)\n\nThe DFE state is the state of total eradication of the disease. Let Eo (Mo, So, Lo, Io, Ro) be the DFE state; at this state both I and L must be zero.
That is, for the DFE state:\n\nIo = Lo = 0 (17)\n\nSubstituting I = L = 0 into equations (7)–(16) and solving simultaneously we have:\n\nFrom Equation (8):\n\ncP − (ϕ + β)M = 0\n\nMo = cP/(ϕ + β) (18)\n\nFrom Equation (10):\n\n(1 − c)P + ϕcP/(ϕ + β) + πR − βS = 0 (19)\n\nFrom Equation (16):\n\nqL + ψI − (π + β)R = 0\n\n⇒ (π + β)R = 0 (since L = I = 0) (20)\n\n⇒ Either (π + β) = 0 or R = 0 (21)\n\nSince π and β are positive constants, (π + β) ≠ 0.\n\nTherefore, Ro = 0.\n\nIf R = 0, Equation (19) becomes\n\nβS = (1 − c)P + ϕcP/(ϕ + β) = (ϕ + β − cβ)P/(ϕ + β)\n\nor\n\nSo = (ϕ + β − cβ)P/(β(ϕ + β)) (22)\n\nTherefore the DFE state of the model is\n\nEo (Mo, So, Lo, Io, Ro) = (cP/(ϕ + β), (ϕ + β − cβ)P/(β(ϕ + β)), 0, 0, 0)\n\nTo determine the stability of the DFE state Eo, we examine the behavior of the model population near this equilibrium solution. Here, we determine condition(s) that must be met if the disease is to be totally eradicated.\n\nRecall that the system of equations in this model at equilibrium state is:\n\ncP − (ϕ + β)M = 0\n\n(1 − c)P + ϕM + πR − (kI + β)S = 0\n\nkSI − (q + μ + β)L = 0\n\nμL − (ψ + β + η)I = 0 (23)\n\nqL + ψI − (π + β)R = 0\n\nWe now linearize the system of equations to get the Jacobian matrix J:\n\nJ =\n[ ω1 0 0 0 0\n ϕ ω2 0 −kSo π\n 0 kIo ω3 kSo 0\n 0 0 μ ω4 0\n 0 0 q ψ ω5 ] (24)\n\nwhere\n\nω1 = −(ϕ + β), ω2 = −(kIo + β), ω3 = −(q + μ + β), ω4 = −(ψ + β + η), ω5 = −(π + β)\n\nAt the disease-free equilibrium Eo (Mo, So, Lo, Io, Ro), where Io = 0, the Jacobian matrix becomes\n\nJo =\n[ ω6 0 0 0 0\n ϕ −β 0 ω7 π\n 0 0 ω8 ω9 0\n 0 0 μ ω10 0\n 0 0 q ψ ω11 ] (25)\n\nwhere\n\nω6 = −(ϕ + β), ω7 = −k(ϕ + β − cβ)P/(β(ϕ + β)), ω8 = −(q + μ + β), ω9 = k(ϕ + β − cβ)P/(β(ϕ + β)), ω10 = −(ψ + β + η), ω11 = −(π + β)\n\nThe characteristic equation |Jo − λI| = 0 is obtained from the Jacobian determinant with the eigenvalues λi (i = 1, 2, 3, 4, 5):\n\n(λ² + (ϕ + 2β)λ + (ϕβ + β²))(−π − β − λ)X = 0 (26)\n\nwhere X is the determinant of the 2 × 2 submatrix in (28) below.\n\nFrom Equation (26), either\n\n(λ² + (ϕ + 2β)λ + (ϕβ + β²))(−π − β − λ) = 0 (27)\n\nor\n\nX = det [ −(q + μ + β) − λ k(ϕ + β − cβ)P/(β(ϕ + β))\n μ −(ψ + β + η) − λ ] = 0 (28)\n\nFrom Equation (27), we deduce\n\nλ1 = −(π + β) (29)\n\nλ2 = −β (30)\n\nand\n\nλ3 = −(ϕ + β) (31)\n\nLet\n\nA = [ −(q + μ + β) − λ k(ϕ + β − cβ)P/(β(ϕ + β))\n μ −(ψ + β + η) − λ ]\n\nFor the DFE to be
asymptotically stable, we require trace(A) < 0 and det A > 0.\n\nThe trace of A is\n\nTrace(A) = −(q + μ + β + λ) − (ψ + β + η + λ)\n\nObviously, trace(A) < 0 since all the parameters q, μ, β, ψ and η are positive.\n\nFor the determinant of A to be positive, we must have\n\n(q + μ + β + λ)(ψ + β + η) > kμ(ϕ + β − cβ)P/(β(ϕ + β))\n\nor\n\nkμ(ϕ + β − cβ)P/(β(ϕ + β)) < (q + μ + β + λ)(ψ + β + η)\n\nFrom equations (29)–(31), λ1, λ2 and λ3 of (25) all have negative real parts. We now establish the necessary and sufficient conditions for the remaining two eigenvalues of (25) to have negative real parts. The remaining two eigenvalues of equation (25) will have negative real parts if and only if det A > 0, i.e.\n\nkμ(ϕ + β − cβ)P/(β(ϕ + β)) < (q + μ + β + λ)(ψ + β + η)\n\nThe Routh-Hurwitz theorem states that the equilibrium state will be asymptotically stable if and only if all the eigenvalues of the characteristic equation |J − λI| = 0 have negative real parts. Using this theorem we see that the DFE of this model will be asymptotically stable if and only if\n\nkμ((ϕ + β − cβ)P/(β(ϕ + β))) < (q + μ + β + λ)(ψ + β + η) (32)\n\nor\n\nq + μ + β + λ > kμ(ϕ + β − cβ)P/(β(ϕ + β)(ψ + β + η)) (33)\n\nThe inequality (32) gives the condition, necessary and sufficient, for the DFE state of the model to be stable (asymptotically). This means that the product of total contraction and total breakdown of the latent class, given by kμ((ϕ + β − cβ)P/(β(ϕ + β))), must be less than the total removal rate from both latent and infectious classes, given by (q + μ + β + λ)(ψ + β + η).\n\nAlternatively, the inequality (32) can also be expressed as (33), which gives the condition, necessary and sufficient, for the stability of the DFE state: the sum of the rate of recovery of latently infected people, the rate at which latently infected individuals progress to active infection and the rate of natural death of individuals (i.e. the total removal rate from the latent class) must have a lower bound given by\n\nkμ(ϕ + β − cβ)P/(β(ϕ + β)(ψ + β + η))\n\n\nConclusion\n\nPresented in this paper is a mathematical model of the role of vaccination and treatment in HB transmission dynamics. The dynamics of the population classes are described using five differential equations.
We conclude that the trivial equilibrium state (the state in which there is no individual in the population) does not exist, since new individuals are continually born into the population. The DFE state, Eo(Mo, So, Lo, Io, Ro), was determined and its stability analysed using the Routh-Hurwitz theorem.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.",
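The behaviour of the MSLIR system (1)–(6) and the stability condition can also be checked numerically. The sketch below is illustrative only: every parameter value is an assumption chosen for demonstration, not a value from the paper's Table 1.

```python
# Numerical sketch of the M-S-L-I-R system (1)-(6) and the DFE stability
# condition. All parameter values are illustrative assumptions, not values
# taken from the paper.
from scipy.integrate import solve_ivp

P = 100.0    # birth inflow per unit time (assumed)
c = 0.6      # fraction of newborns vaccinated (assumed)
phi = 0.1    # waning rate of vaccine-induced immunity (assumed)
beta = 0.02  # natural death rate (assumed)
k = 5e-5     # transmission coefficient (assumed)
q = 0.05     # recovery rate from the latent class (assumed)
mu = 0.2     # progression rate, latent -> infectious (assumed)
psi = 0.3    # treatment/recovery rate of infectious individuals (assumed)
eta = 0.05   # disease-induced death rate (assumed)
pi_ = 0.05   # rate of loss of immunity after recovery (assumed)

def rhs(t, y):
    """Right-hand side of equations (1)-(5)."""
    M, S, L, I, R = y
    dM = c * P - (phi + beta) * M
    dS = (1 - c) * P + phi * M + pi_ * R - (k * I + beta) * S
    dL = k * S * I - (q + mu + beta) * L
    dI = mu * L - (psi + eta + beta) * I
    dR = q * L + psi * I - (pi_ + beta) * R
    return [dM, dS, dL, dI, dR]

# Disease-free equilibrium from equations (18) and (22)
M0 = c * P / (phi + beta)
S0 = (phi + beta - c * beta) * P / (beta * (phi + beta))

# Stability threshold: contraction/breakdown product vs. removal-rate product
contraction = k * mu * S0
removal = (q + mu + beta) * (psi + eta + beta)

# Perturb the DFE with one infectious individual and integrate
sol = solve_ivp(rhs, (0.0, 500.0), [M0, S0, 0.0, 1.0, 0.0], rtol=1e-8)
```

With these assumed rates the threshold inequality holds (contraction < removal), so the perturbation dies out and the trajectory returns to the DFE; increasing k enough to reverse the inequality instead drives the system away from the DFE.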
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nWorld Health Organization: Hepatitis b.fact sheet.World Health Organization, Revised, July 2017.\n\nHalfmann P, Kim JH, Ebihara H, et al.: Generation of biologically contained ebola viruses. Proc Natl Acad Sci U S A. 2008; 105(4): 1129–1133. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnderson RM, May RM: Infectious diseases of humans: dynamics and control. 1992; 28. Reference Source\n\nPang J, Cui JA, Zhou X: Dynamical behavior of a hepatitis b virus transmission model with vaccination. J Theor Biol. 2010; 265(4): 572–578. PubMed Abstract | Publisher Full Text\n\nKhan MA, Islam S, Arif M, et al.: Transmission model of hepatitis b virus with the migration effect. Biomed Res Int. 2013; 2013: 150681. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "52026",
"date": "25 Sep 2019",
"name": "Ramsès Djidjou Demasse",
"expertise": [
"Reviewer Expertise Differential Equations",
"Dynamical Systems",
"and Mathematical Biology."
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSummary. Authors proposed a compartmental SEIR model for the spread of Hepatitis B Virus (HBV). Attention is given to formulated with considerations of different populations of individuals the role of HBV vaccination of new born babies and the treatment of infected individuals. The model analysis mainly concerns the local stability of the disease-free equilibrium (DFE).\n\nTo the question “Is the rationale for developing the new method (or application) clearly explained?”, my answer is “No”. Indeed, there is some literature on the subject of HBV where age-structured model for the transmission dynamics of HBV is proposed with a focus on intervention options such as vaccination, prevention of HBV perinatal infections and treatment. See for example Djidjou-Demasse et al. (2016)1, Zhao et al. (2000)2, Zou et al. (2010)34 and references therein.\n\nTo the question “Is the description of the method technically sound?”, my answer is “No”. Well, authors derived two basic properties of the model dynamics: the existence of a DFE and the local stability. However, the manuscript is suffering from a lack of originality compared to the existing literature on the similar subjects. 
Moreover, according to the title of the manuscript, “Mathematical model and analysis of hepatitis B virus transmission dynamics”, more analysis material was expected in the manuscript.\n\nTo the question “Are the conclusions about the method and its performance adequately supported by the findings presented in the article?”, my answer is “No”. The authors stated that \"we use an infectious disease model to understand the impact of HB vaccination and treatment on the dynamics of HBV transmission and prevalence\". However, through the authors' analysis it is far from clear how they highlight the impact of vaccination and treatment on the HBV dynamics.\n\nAs a general comment, it is quite surprising how the authors tackle the question of vaccination of newborn babies without structuring the host population by age (or by age groups).\nIn conclusion, I have marked this paper Not Approved for indexing.\n\nIs the rationale for developing the new method (or application) clearly explained? No\n\nIs the description of the method technically sound? No\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1312
|
https://f1000research.com/articles/7-1311/v1
|
17 Aug 18
|
{
"type": "Review",
"title": "Milestones achieved in response to drought stress through reverse genetic approaches",
"authors": [
"Baljeet Singh",
"Sarvjeet Kukreja",
"Umesh Goutam",
"Baljeet Singh",
"Sarvjeet Kukreja"
],
"abstract": "Drought stress is the most important abiotic stress that constrains crop production and reduces yield drastically. The germplasm of most of the cultivated crops possesses numerous unknown drought stress tolerant genes. Moreover, there are many reports suggesting that the wild species of most of the modern cultivars have abiotic stress tolerant genes. Due to climate change and population booms, food security has become a global issue. To develop drought tolerant crop varieties knowledge of various genes involved in drought stress is required. Different reverse genetic approaches such as virus-induced gene silencing (VIGS), clustered regularly interspace short palindromic repeat (CRISPR), targeting induced local lesions in genomes (TILLING) and expressed sequence tags (ESTs) have been used extensively to study the functionality of different genes involved in response to drought stress. In this review, we described the contributions of different techniques of functional genomics in the study of drought tolerant genes.",
"keywords": [
"VIGS",
"CRISPR",
"TILLING",
"ESTs",
"Drought stress",
"Climate change",
"Reverse Genetics",
"Functional genomics"
],
"content": "Introduction\n\nNowadays, global food security has becomes a major challenge due to the extreme changes to the climate and increases in the global population1. Therefore, plants are growing under various kinds of unfavourable environmental stresses such as drought, salinity, heat, cold and oxidative stresses which are retarding the growth and yield2,3. Of these, drought stress is the most predominant abiotic stress making this situation worse. Over the last decade, climate change has been increasing the frequency drought conditions and reduced the crop yield (Table 1) by affecting the basic plant growth processes such as seed germination, photosynthesis, source sink relationships, turgor pressure, cell division and elongation, enzyme activities, and secondary metabolites production14–24. In addition, drought can also increase the production and accumulation of reactive oxygen species (ROS) in plants which leads to oxidative stress too25,26. Several genes that express under drought conditions are involved in the regulation of all these processes and pathways. In recent years, many drought tolerant genes have been identified in major food crops and still there are numerous genes taking part in drought stress whose functions are unknown. With the help of available genomic and transcriptomic data reverse genetic approaches accelerated the investigations of gene function under different abiotic stresses27.\n\nFrom the perspective of crop improvement, transgenic approaches have been successfully used in many crops. However, development of stable transgenic lines is relatively expensive, time consuming and a laborious task. Moreover, it is not successful in many cultivated crops and slows down the investigations into specific gene28. 
In contrast, several techniques are available for the study of these genes which give prompt results and have other advantages over transgenic techniques for the analysis of target gene(s), such as virus-induced gene silencing (VIGS), the clustered regularly interspaced short palindromic repeat (CRISPR)-Cas9 system, targeting induced local lesions in genomes (TILLING) and expressed sequence tags (ESTs)29–32.\n\n\nVIGS\n\nVIGS is a simple, rapid, reliable and cost-effective post-transcriptional gene silencing (PTGS) technique for the study of endogenous genes, and a powerful tool for the mining and study of genes involved in drought tolerance (Table 2). In VIGS, a 200–400 bp fragment of the target gene is selected and cloned into a viral vector, which infects the plant and triggers the silencing of that particular gene29,55. For efficient gene silencing, the selection of the target fragment is crucial. This technology can be used for forward and reverse genetics in both monocotyledonous and dicotyledonous plants56,57. VIGS does not require stable plant transformants58. Moreover, a number of different genes can be studied simultaneously, and a specific target can also be silenced individually59,60. Many VIGS vectors have been developed for different crops by modifying plant viruses, and they have been used successfully for the functional study of genes expressed under drought stress61–64. These VIGS vectors, along with the target gene, can be introduced into plants by different methods such as agrodrench, needleless syringe inoculation, agroinoculation, prick inoculation, and biolistic inoculation29,65.\n\nPlants have adopted many molecular mechanisms to withstand different abiotic stresses, and a number of stress-related genes are stimulated under stress conditions66,67.
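The fragment-selection step described above (choosing a 200–400 bp region of the target gene for the VIGS construct) can be sketched in code. The sliding-window scan, the 300 bp window, the GC-content bounds and the toy sequence below are all illustrative assumptions, not criteria taken from the studies cited.

```python
# Hypothetical helper for shortlisting VIGS target fragments: scan a coding
# sequence for 300 bp windows with moderate GC content. Window length, GC
# bounds and the toy sequence are illustrative assumptions only.
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

def candidate_fragments(cds, length=300, gc_min=0.40, gc_max=0.60, step=50):
    """Return (start, fragment) pairs of `length` bp within the GC bounds."""
    out = []
    for start in range(0, len(cds) - length + 1, step):
        frag = cds[start:start + length]
        if gc_min <= gc_content(frag) <= gc_max:
            out.append((start, frag))
    return out

# Toy 720 bp coding sequence (repetitive, for illustration only)
cds = "ATGGCT" * 120
hits = candidate_fragments(cds)
```

In practice a fragment would also be screened for specificity (e.g. against off-target transcripts) before cloning into the viral vector; this sketch only illustrates the windowing idea.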
Among them, mitogen-activated protein kinases (MAPKs) are among the most important enzymes for plant growth and development and also play an important role in signal transduction under extreme conditions68–72. The role of different MAPKs under drought stress has been studied through VIGS. Silencing of the genes SpMPK1, SpMPK2 and SpMPK3 in Solanum pimpinellifolium, SlMPK4 in Solanum lycopersicum, and GhMKK3 in Gossypium hirsutum reduced drought tolerance in the silenced plants41,42,51.\n\nIn addition, various transcription factors regulate plant behaviour in response to environmental conditions73. WRKY transcription factors play a crucial role in plant development under drought stress74; in cotton, VIGS of GhWRKY27a enhanced tolerance to drought stress52. Another transcription factor family, NAC, also plays an important role under drought75: silencing of the GhNAC79 and JUB1 genes in cotton and tomato, respectively, made the plants more sensitive to drought44,54. The PbrMYB21 gene, belonging to the MYB family of transcription factors (TFs), was studied in Pyrus betulaefolia; PbrMYB21-silenced plants exhibited decreased drought tolerance in comparison to control plants45. Besides these, SR/CAMTA proteins form a small family of TFs, and silencing of the SlSR1L and SlGRX1 genes resulted in decreased tolerance against drought stress in tomato43,53.\n\nFurthermore, autophagy, a protein degradation process induced in plants in response to environmental stimuli, has been implicated in the drought response through autophagy-related (ATG) genes76–78. The ATG8 gene in wheat, and ATG6 and its orthologs in wheat, rice and barley, are induced in response to multiple abiotic stresses. A barley stripe mosaic virus (BSMV)-based VIGS system was used to assess their function under drought stress. 
The results indicated the active participation of ATG genes in various survival mechanisms used by plants under drought36,39. Beyond cultivated crops, many drought tolerance genes have also been reported in weeds and in wild relatives of major cultivars; for instance, the ApDRI15 gene in the weed Alternanthera philoxeroides has been identified as a drought tolerance gene through VIGS79.\n\n\nExpressed sequence tags (ESTs)\n\nESTs are short sequence reads generated from cDNA libraries that can be used to identify or study genes80. Functional studies of specific genes using this technique can provide results in a cost-effective manner81. Large-scale EST sequencing has been performed in various crops and is in progress in several others; millions of ESTs from different crops are available in the Expressed Sequence Tags database of the National Center for Biotechnology Information. To identify drought stress-responsive genes, cDNA libraries are first constructed from plants growing under stressed conditions or from drought-challenged tissues of drought-responsive genotypes; ESTs are then identified by sequencing the clones80,82. ESTs provide high-quality transcripts for the investigation of genes as functional markers under stress conditions. During the last two decades, drought-responsive genes have been identified and studied using ESTs in a number of crops, such as common bean83, barley84, chickpea85–88, sorghum89,90, rice91–94, Camelina sativa95, wheat96,97, Kodo millet98, pearl millet99,100, sweet potato101, rapeseed102, peanut103, and Ammopiptanthus mongolicus104. Analysis by BLASTX or qRT-PCR can then be performed to find the most promising ESTs82.\n\n\nTILLING\n\nWith advances in high-throughput techniques, the genomes of a large number of crops are now available, presenting new opportunities for the application of traditional mutation-based reverse genetics techniques105. 
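With a reference sequence in hand, the core computational step of any mutation-based screen is locating single-base differences between a mutagenized line's amplicon and the reference. The following is a minimal sketch with made-up sequences (the function name and sequences are invented for illustration; real screens detect mismatches biochemically or by sequencing before any such comparison):

```python
def point_mutations(reference, amplicon):
    """Report positions where an amplicon differs from the reference by a
    single-base substitution, e.g. the G/C -> A/T transitions typically
    induced by EMS. Assumes the sequences are aligned and equal length."""
    if len(reference) != len(amplicon):
        raise ValueError("align the sequences before comparing")
    return [(i, r, m) for i, (r, m) in enumerate(zip(reference, amplicon))
            if r != m]

# made-up 40 bp reference and a mutant carrying two EMS-like transitions
ref = "ATGGCTTACGGAGCTTGGACTCCAGGTTACGATGCTTGAA"
mut = ref[:10] + "A" + ref[11:21] + "T" + ref[22:]
print(point_mutations(ref, mut))  # [(10, 'G', 'A'), (21, 'C', 'T')]
```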
TILLING is a non-transgenic method used to study allelic variation in a target gene across a mutant population; the effect of each mutant allele is inferred from changes in plant phenotype28,106,107. It is a quick and comparatively cheap method for screening single nucleotide polymorphisms (SNPs) in the target sequence, and these point mutations can be identified by PCR105,108. Moreover, the technique is applicable to any plant species whose genome sequence is available, regardless of ploidy level. In TILLING, chemical mutagens are used to induce random mutations in the plant genome105; in most experiments, ethyl methanesulfonate (EMS) is used as the mutagen to generate the TILLING population30. To study natural polymorphisms, such as those arising under differing environmental conditions, a modified technique called EcoTILLING has been developed, which appears to be a more promising strategy for studying genes related to abiotic stresses109,110.\n\n\nCRISPR Technology\n\nCRISPR (clustered regularly interspaced short palindromic repeat)/CRISPR-associated nuclease protein (Cas) 9 technology, based upon a bacterial antiviral defense mechanism, offers various new opportunities for researchers. It is a relatively simple, less cytotoxic and highly efficient targeted genome-editing technology in comparison to traditional techniques used for the same purpose111,112. CRISPR/Cas9-based gene editing has become common practice in many labs. It involves the Cas9 endonuclease, originally derived from Streptococcus pyogenes, and a guide RNA that directs Cas9 to the target sequence; together they generate double-stranded DNA breaks, which are later repaired by the error-prone non-homologous end joining (NHEJ) pathway or by the homology-directed repair (HDR) pathway113,114. Recently, this technology has been used extensively for crop improvement31,115–120. 
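The guide-RNA targeting step at the heart of this system can be sketched as a simple search for SpCas9 'NGG' PAM sites, taking the 20 nt immediately 5' of each PAM as a candidate protospacer. This toy forward-strand scan (invented sequence; the function name is hypothetical) ignores the reverse strand and the genome-wide off-target scoring that real design tools perform:

```python
def find_guides(seq, pam="GG"):
    """Return (position, protospacer) pairs for every forward-strand
    site where a 20-nt guide is immediately followed by an NGG PAM."""
    guides = []
    for i in range(20, len(seq) - 2):      # seq[i] is the 'N' of NGG
        if seq[i + 1:i + 3] == pam:
            guides.append((i - 20, seq[i - 20:i]))
    return guides

# invented toy sequence with a single AGG PAM after a 20-nt stretch
seq = "ACGT" * 5 + "AGGTTTT"
print(find_guides(seq))  # [(0, 'ACGTACGTACGTACGTACGT')]
```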
This system has been successfully used to study genes involved in drought stress (Table 3) in the model plant Arabidopsis121 and also in a number of crops such as soybean and maize114,122, rice123, and tomato124.\n\n\nConclusion and future perspectives\n\nSevere droughts are becoming more common every year and are reducing crop yields considerably, so there is an urgent need for drought-tolerant varieties. Breeding and transgenic approaches could solve this problem, but knowledge of the molecular mechanisms and genes taking part in drought tolerance is essential. Several reverse genetics techniques have proved their potential in many crops, and some are still evolving. During the last decade, the genomes of several crops were successfully sequenced, various new VIGS systems have been developed for different crops104–131, and CRISPR has become the most powerful tool for genome editing126–131. Thus, these techniques can play a pivotal role in crop improvement and can contribute greatly to the development of drought-tolerant varieties.\n\n\nData availability\n\nNo data are associated with this article",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nLesk C, Rowhani P, Ramankutty N: Influence of extreme weather disasters on global crop production. Nature. 2016; 529(7584): 84–7. PubMed Abstract | Publisher Full Text\n\nNouri MZ, Moumeni A, Komatsu S: Abiotic Stresses: Insight into Gene Regulation and Protein Expression in Photosynthetic Pathways of Plants. Int J Mol Sci. 2015; 16(9): 20392–416. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMickelbart MV, Hasegawa PM, Bailey-Serres J: Genetic mechanisms of abiotic stress tolerance that translate to crop yield stability. Nat Rev Genet. 2015; 16(4): 237–51. PubMed Abstract | Publisher Full Text\n\nBalla K, Rakszegi M, Li Z, et al.: Quality of winter wheat in relation to heat and drought shock after anthesis. Czech J Food Sci. 2011; 29(2): 117–28. Publisher Full Text\n\nFahad S, Bajwa AA, Nazir U, et al.: Crop Production under Drought and Heat Stress: Plant Responses and Management Options. Front Plant Sci. 2017; 8: 1147. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDaryanto S, Wang L, Jacinthe PA: Global Synthesis of Drought Effects on Maize and Wheat Production. PLoS One. 2016; 11(5): e0156362. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLafitte HR, Yongsheng G, Yan S, et al.: Whole plant responses, key processes, and adaptation to drought stress: the case of rice. J Exp Bot. 2007; 58(2): 169–75. PubMed Abstract | Publisher Full Text\n\nKamara AY, Menkir A, Badu-Apraku B, et al.: The influence of drought stress on growth, yield and yield components of selected maize genotypes. J Agric Sci. 2003; 141(1): 43–50. 
Publisher Full Text\n\nNayyar H, Kaur S, Singh S, et al.: Differential sensitivity of Desi (small-seeded) and Kabuli (large-seeded) chickpea genotypes to water stress during seed filling: Effects on accumulation of seed reserves and yield. J Sci Food Agric. 2006; 86(13): 2076–82. Publisher Full Text\n\nSamarah NH, Mullen RE, Cianzio SR, et al.: Dehydrin-like proteins in soybean seeds in response to drought stress during seed filling. Crop Sci. 2006; 46(5): 2141–50. Publisher Full Text\n\nMazahery-Laghab H, Nouri F, et al.: Effects of the reduction of drought stress using supplementary irrigation for sunflower (Helianthus annuus) in dry farming conditions. Pajouhesh va Sazandegi Agron Hortic. 2003; 59: 81–6. Reference Source\n\nDeblonde PMK, Ledent JF: Effects of moderate drought conditions on green leaf number, stem height, leaf length and tuber yield of potato cultivars. Eur J Agron. 2001; 14(1): 31–41. Publisher Full Text\n\nJamieson PD, Martin RJ, Francis GS: Drought influences on grain yield of barley, wheat, and maize. New Zeal J Crop Hortic Sci. 1995; 23(1): 55–66. Publisher Full Text\n\nBasu S, Ramegowda V, Kumar A, et al.: Plant adaptation to drought stress [version 1; referees: 3 approved]. F1000Res. 2016; 5: pii: F1000 Faculty Rev-1554. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTietjen B, Schlaepfer DR, Bradford JB, et al.: Climate change-induced vegetation shifts lead to more ecological droughts despite projected rainfall increases in many global temperate drylands. Glob Chang Biol. 2017; 23(7): 2743–54. PubMed Abstract | Publisher Full Text\n\nFlexas J, Bota J, Loreto F, et al.: Diffusive and metabolic limitations to photosynthesis under drought and salinity in C3 plants. Plant Biol (Stuttg). 2004; 6(3): 269–79. PubMed Abstract | Publisher Full Text\n\nLobell DB, Schlenker W, Costa-Roberts J: Climate trends and global crop production since 1980. Science. 2011; 333(6042): 616–20. 
PubMed Abstract | Publisher Full Text\n\nSi C, Zhang JY, Xu HC: [Advances in studies on growth metabolism and response mechanisms of medicinal plants under drought stress]. Zhongguo Zhong Yao Za Zhi. 2014; 39(13): 2432–7. PubMed Abstract\n\nYordanov I, Velikova V, Tsonev T: Plant responses to drought, acclimation, and stress tolerance. Photosynthetica. 2000; 38(2): 171–86. Publisher Full Text\n\nBarnabás B, Jäger K, Fehér A: The effect of drought and heat stress on reproductive processes in cereals. Plant Cell Environ. 2008; 31(1): 11–38. PubMed Abstract | Publisher Full Text\n\nKaya MD, Okçu G, Atak M, et al.: Seed treatments to overcome salt and drought stress during germination in sunflower (Helianthus annuus L.). Eur J Agron. 2006; 24(4): 291–5. Publisher Full Text\n\nFarooq M, Wahid A, Kobayashi N, et al.: Plant drought stress: effects, mechanisms and management. Agron Sustain Dev. 2009; 29(1): 185–212. Publisher Full Text\n\nHussain M, Malik MA, Farooq M, et al.: Improving drought tolerance by exogenous application of glycinebetaine and salicylic acid in sunflower. J Agron Crop Sci. 2008; 194(3): 193–9. Publisher Full Text\n\nPraba ML, Cairns JE, Babu RC, et al.: Identification of physiological traits underlying cultivar differences in drought tolerance in rice and wheat. J Agron Crop Sci. 2009; 195(1): 30–46. Publisher Full Text\n\nPastori GM, Foyer CH: Common components, networks, and pathways of cross-tolerance to stress. The central role of \"redox\" and abscisic acid-mediated controls. Plant Physiol. 2002; 129(2): 460–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNagahatenna DS, Langridge P, Whitford R: Tetrapyrrole-based drought stress signalling. Plant Biotechnol J. 2015; 13(4): 447–59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAzevedo H, Silva-Correia J, Oliveira J, et al.: A strategy for the identification of new abiotic stress determinants in Arabidopsis using web-based data mining and reverse genetics. OMICS. 
2011; 15(12): 935–47. PubMed Abstract | Publisher Full Text\n\nSlade AJ, Knauf VC: TILLING moves beyond functional genomics into crop improvement. Transgenic Res. 2005; 14(2): 109–15. PubMed Abstract | Publisher Full Text\n\nSenthil-Kumar M, Mysore KS: Tobacco rattle virus-based virus-induced gene silencing in Nicotiana benthamiana. Nat Protoc. 2014; 9(7): 1549–62. PubMed Abstract | Publisher Full Text\n\nTadele Z: Drought Adaptation in Millets. In: Abiotic and Biotic Stress in Plants - Recent Advances and Future Perspectives. 2016. Publisher Full Text\n\nKhatodia S, Bhatotia K, Passricha N, et al.: The CRISPR/Cas Genome-Editing Tool: Application in Improvement of Crops. Front Plant Sci. 2016; 7: 506. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMohanta TK, Bashir T, Hashem A, et al.: Genome Editing Tools in Plants. Genes (Basel). 2017; 8(12): pii: E399. PubMed Abstract | Publisher Full Text | Free Full Text\n\nManmathan H, Shaner D, Snelling J, et al.: Virus-induced gene silencing of Arabidopsis thaliana gene homologues in wheat identifies genes conferring improved drought tolerance. J Exp Bot. 2013; 64(5): 1381–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKang G, Li G, Ma H, et al.: Proteomic analysis on the leaves of TaBTF3 gene virus-induced silenced wheat plants may reveal its regulatory mechanism. J Proteomics. 2013; 83: 130–43. PubMed Abstract | Publisher Full Text\n\nWang Y, He X, Ma W, et al.: Wheat PROTON GRADIENT REGULATION 5 is involved in tolerance to photoinhibition. J Integr Agric. 2014; 13(6): 1206–15. Publisher Full Text\n\nKuzuoglu-Ozturk D, Cebeci Yalcinkaya O, Akpinar BA, et al.: Autophagy-related gene, TdAtg8, in wild emmer wheat plays a role in drought and osmotic stress response. Planta. 2012; 236(4): 1081–92. PubMed Abstract | Publisher Full Text\n\nLiang J, Deng G, Long H, et al.: Virus-induced silencing of genes encoding LEA protein in Tibetan hulless barley (Hordeum vulgare ssp. 
vulgare) and their relationship to drought tolerance. Mol Breed. 2012; 30(1): 441–51. Publisher Full Text\n\nHe X, Zeng J, Cao F, et al.: HvEXPB7, a novel β-expansin gene revealed by the root hair transcriptome of Tibetan wild barley, improves root hair growth under drought stress. J Exp Bot. 2015; 66(22): 7405–19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZeng X, Zeng Z, Liu C, et al.: A barley homolog of yeast ATG6 is involved in multiple abiotic stress responses and stress resistance regulation. Plant Physiol Biochem. 2017; 115: 97–106. PubMed Abstract | Publisher Full Text\n\nSenthil-Kumar M, Udayakumar M: High-throughput virus-induced gene-silencing approach to assess the functional relevance of a moisture stress-induced cDNA homologous to lea4. J Exp Bot. 2006; 57(10): 2291–302. PubMed Abstract | Publisher Full Text\n\nLi C, Yan JM, Li YZ, et al.: Silencing the SpMPK1, SpMPK2, and SpMPK3 genes in tomato reduces abscisic acid-mediated drought tolerance. Int J Mol Sci. 2013; 14(11): 21983–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVirk N, Liu B, Zhang H, et al.: Tomato SlMPK4 is required for resistance against Botrytis cinerea and tolerance to drought stress. Acta Physiol Plant. 2013; 35(4): 1211–21. Publisher Full Text\n\nLi X, Huang L, Zhang Y, et al.: Tomato SR/CAMTA transcription factors SlSR1 and SlSR3L negatively regulate disease resistance response and SlSR1L positively modulates drought stress tolerance. BMC Plant Biol. 2014; 14(1): 286. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThirumalaikumar VP, Devkar V, Mehterov N, et al.: NAC transcription factor JUNGBRUNNEN1 enhances drought tolerance in tomato. Plant Biotechnol J. 2018; 16(2): 354–366. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi K, Xing C, Yao Z, et al.: PbrMYB21, a novel MYB protein of Pyrus betulaefolia, functions in drought tolerance and modulates polyamine levels by regulating arginine decarboxylase gene. Plant Biotechnol J. 
2017; 15(9): 1186–203. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChoi HW, Hwang BK: The pepper extracellular peroxidase CaPO2 is required for salt, drought and oxidative stress tolerance as well as resistance to fungal pathogens. Planta. 2012; 235(6): 1369–82. PubMed Abstract | Publisher Full Text\n\nLim CW, Lee SC: Functional roles of the pepper MLO protein gene, CaMLO2, in abscisic acid signaling and drought sensitivity. Plant Mol Biol. 2014; 85(1–2): 1–10. PubMed Abstract | Publisher Full Text\n\nPark C, Lim CW, Baek W, et al.: RING Type E3 Ligase CaAIR1 in Pepper Acts in the Regulation of ABA Signaling and Drought Stress Response. Plant Cell Physiol. 2015; 56(9): 1808–19. PubMed Abstract | Publisher Full Text\n\nPark C, Lim CW, Lee SC: The pepper RING-Type E3 ligase, CaAIP1, functions as a positive regulator of drought and high salinity stress responses. Plant Cell Physiol. 2016; 57(10): 2202–12. PubMed Abstract | Publisher Full Text\n\nPark C, Lim WC, Baek W, et al.: The pepper WPP domain protein, CaWDP1, acts as a novel negative regulator of drought stress via ABA signaling. Plant Cell Physiol. 2017; 58(4): 779–88. PubMed Abstract | Publisher Full Text\n\nWang C, Lu W, He X, et al.: The Cotton Mitogen-Activated Protein Kinase Kinase 3 Functions in Drought Tolerance by Regulating Stomatal Responses and Root Growth. Plant Cell Physiol. 2016; 57(8): 1629–42. PubMed Abstract | Publisher Full Text\n\nYan Y, Jia H, Wang F, et al.: Overexpression of GhWRKY27a reduces tolerance to drought stress and resistance to Rhizoctonia solani infection in transgenic Nicotiana benthamiana. Front Physiol. 2015; 6: 265. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuo Y, Huang C, Xie Y, et al.: A tomato glutaredoxin gene SlGRX1 regulates plant responses to oxidative, drought and salt stresses. Planta. 2010; 232(6): 1499–509. 
PubMed Abstract | Publisher Full Text\n\nGuo Y, Pang C, Jia X, et al.: An NAM Domain Gene, GhNAC79, Improves Resistance to Drought Stress in Upland Cotton. Front Plant Sci. 2017; 8: 1657. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTasaki K, Terada H, Masuta C, et al.: Virus-induced gene silencing (VIGS) in Lilium leichtlinii using the Cucumber mosaic virus vector. Plant Biotechnol. 2016; 33(5): 373–81. Publisher Full Text\n\nRamegowda V, Mysore KS, Senthil-Kumar M: Virus-induced gene silencing is a versatile tool for unraveling the functional relevance of multiple abiotic-stress-responsive genes in crop plants. Front Plant Sci. 2014; 5: 323. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBecker A, Lange M: VIGS--genomics goes functional. Trends Plant Sci. 2010; 15(1): 1–4. PubMed Abstract | Publisher Full Text\n\nSahu PP, Puranik S, Khan M, et al.: Recent advances in tomato functional genomics: utilization of VIGS. Protoplasma. 2012; 249(4): 1017–27. PubMed Abstract | Publisher Full Text\n\nPurkayastha A, Dasgupta I: Virus-induced gene silencing: a versatile tool for discovery of gene functions in plants. Plant Physiol Biochem. 2009; 47(11–12): 967–76. PubMed Abstract | Publisher Full Text\n\nFernandez-Pozo N, Rosli HG, Martin GB, et al.: The SGN VIGS tool: user-friendly software to design virus-induced gene silencing (VIGS) constructs for functional genomics. Mol Plant. 2015; 8(3): 486–8. PubMed Abstract | Publisher Full Text\n\nRatcliff F, Martin-Hernandez AM, Baulcombe DC, et al.: Technical Advance. Tobacco rattle virus as a vector for analysis of gene function by silencing. Plant J. 2001; 25(2): 237–45. PubMed Abstract | Publisher Full Text\n\nRobertson D: VIGS vectors for gene silencing: many targets, many tools. Annu Rev Plant Biol. 2004; 55(1): 495–519. 
PubMed Abstract | Publisher Full Text\n\nIgarashi A, Yamagata K, Sugai T, et al.: Apple latent spherical virus vectors for reliable and effective virus-induced gene silencing among a broad range of plants including tobacco, tomato, Arabidopsis thaliana, cucurbits, and legumes. Virology. 2009; 386(2): 407–16. PubMed Abstract | Publisher Full Text\n\nLange M, Yellina AL, Orashakova S, et al.: Virus-induced gene silencing (VIGS) in plants: an overview of target species and the virus-derived vector systems. Methods Mol Biol. 2013; 975: 1–14. PubMed Abstract | Publisher Full Text\n\nCorbin C, Lafontaine F, Sepúlveda LJ, et al.: Virus-induced gene silencing in Rauwolfia species. Protoplasma. 2017; 254(4): 1813–8. PubMed Abstract | Publisher Full Text\n\nGao JP, Chao DY, Lin HX: Toward Understanding Molecular Mechanisms of Abiotic Stress Responses in Rice. Rice. 2008; 1(1): 36–51. Publisher Full Text\n\nShivani, Dwivedi DK, Husain R, et al.: Physiological, Morphological and Molecular Mechanisms for Drought Tolerance in Rice. Int J Curr Microbiol Appl Sci. 2017; 6(7): 4160–73. Publisher Full Text\n\nAra H, Sinha AK: Conscientiousness of mitogen activated protein kinases in acquiring tolerance for abiotic stresses in plants. Proc Indian Natl Sci Acad. 2014; 80(2): 211–9. Reference Source\n\nNakagami H, Pitzschke A, Hirt H: Emerging MAP kinase pathways in plant stress signalling. Trends Plant Sci. 2005; 10(7): 339–46. PubMed Abstract | Publisher Full Text\n\nPitzschke A, Schikora A, Hirt H: MAPK cascade signalling networks in plant defence. Curr Opin Plant Biol. 2009; 12(4): 421–6. PubMed Abstract | Publisher Full Text\n\nZhang S, Klessig DF: MAPK cascades in plant defense signaling. Trends Plant Sci. 2001; 6(11): 520–7. PubMed Abstract | Publisher Full Text\n\nRomeis T: Protein kinases in the plant defence response. Curr Opin Plant Biol. 2001; 4(5): 407–14. PubMed Abstract | Publisher Full Text\n\nGuo R, Yu F, Gao Z, et al.: GhWRKY3, a novel cotton (Gossypium hirsutum L.) 
WRKY gene, is involved in diverse stress responses. Mol Biol Rep. 2011; 38(1): 49–58. PubMed Abstract | Publisher Full Text\n\nZhang T, Huang L, Wang Y, et al.: Differential transcriptome profiling of chilling stress response between shoots and rhizomes of Oryza longistaminata using RNA sequencing. PLoS One. 2017; 12(11): e0188625. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNuruzzaman M, Sharoni AM, Kikuchi S: Roles of NAC transcription factors in the regulation of biotic and abiotic stress responses in plants. Front Microbiol. 2013; 4: 248. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu Y, Xiong Y, Bassham DC: Autophagy is required for tolerance of drought and salt stress in plants. Autophagy. 2009; 5(7): 954–63. PubMed Abstract | Publisher Full Text\n\nNolan TM, Brennan B, Yang M, et al.: Selective Autophagy of BES1 Mediated by DSK2 Balances Plant Growth and Survival. Dev Cell. 2017; 41(1): 33–46.e7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang W, Xu M, Wang G, et al.: Autophagy: An Important Biological Process That Protects Plants from Stressful Environments. Front Plant Sci. 2017; 7: 2030. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBai C, Wang P, Fan Q, et al.: Analysis of the Role of the Drought-Induced Gene DRI15 and Salinity-Induced Gene SI1 in Alternanthera philoxeroides Plasticity Using a Virus-Based Gene Silencing Tool. Front Plant Sci. 2017; 8: 1579. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlair MW, Hurtado N, Chavarro CM, et al.: Gene-based SSR markers for common bean (Phaseolus vulgaris L.) derived from root and leaf tissue ESTs: an integration of the BMc series. BMC Plant Biol. 2011; 11: 50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBouchez D, Höfte H: Functional genomics in plants. Plant Physiol. 1998; 118(3): 725–32. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMir RR, Zaman-Allah M, Sreenivasulu N, et al.: Integrated genomics, physiology and breeding approaches for improving drought tolerance in crops. Theor Appl Genet. 2012; 125(4): 625–45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYao LM, Wang B, Cheng LJ, et al.: Identification of key drought stress-related genes in the hyacinth bean. PLoS One. 2013; 8(3): e58108. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRadwan A, Ali RMIA, Nada A, et al.: Isolation and characterization of some drought-related ESTs from barley. African J Biotechnol. 2015; 14(9): 794–810. Publisher Full Text\n\nRamalingam A, Kudapa H, Pazhamala LT, et al.: Gene Expression and Yeast Two-Hybrid Studies of 1R-MYB Transcription Factor Mediating Drought Stress Response in Chickpea (Cicer arietinum L.). Front Plant Sci. 2015; 6: 1117. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeokar AA, Kondawar V, Jain PK, et al.: Comparative analysis of expressed sequence tags (ESTs) between drought-tolerant and -susceptible genotypes of chickpea under terminal drought stress. BMC Plant Biol. 2011; 11: 70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVarshney RK, Hiremath PJ, Lekha P, et al.: A comprehensive resource of drought- and salinity- responsive ESTs for gene discovery and marker development in chickpea (Cicer arietinum L.). BMC Genomics. 2009; 10: 523. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJain D, Chattopadhyay D: Analysis of gene expression in response to water deficit of chickpea (Cicer arietinum L.) varieties differing in drought tolerance. BMC Plant Biol. 2010; 10: 24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoldesemayat AA, Van Heusden P, Ndimba BK, et al.: An integrated and comparative approach towards identification, characterization and functional annotation of candidate genes for drought tolerance in sorghum (Sorghum bicolor (L.) Moench). BMC Genet. 
2017; 18(1): 119. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSrinivas G, Satish K, Madhusudhana R, et al.: Exploration and mapping of microsatellite markers from subtracted drought stress ESTs in Sorghum bicolor (L.) Moench. Theor Appl Genet. 2009; 118(4): 703–17. PubMed Abstract | Publisher Full Text\n\nNie YY, Zhang L, Wu YH, et al.: Retracted: Screening of candidate genes and fine mapping of drought tolerance quantitative trait loci on chromosome 4 in rice (Oryza sativa L.) under drought stress. Ecol Evol. 2015; 5(21): 5007–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXia H, Zheng X, Chen L, et al.: Genetic differentiation revealed by selective loci of drought-responding EST-SSRs between upland and lowland rice in China. PLoS One. 2014; 9(10): e106352. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGorantla M, Babu PR, Reddy Lachagari VB, et al.: Identification of stress-responsive genes in an indica rice (Oryza sativa L.) using ESTs generated from drought-stressed seedlings. J Exp Bot. 2007; 58(2): 253–65. PubMed Abstract | Publisher Full Text\n\nHadiarto T, Tran LS: Progress studies of drought-responsive genes in rice. Plant Cell Rep. 2011; 30(3): 297–310. PubMed Abstract | Publisher Full Text\n\nKanth BK, Kumari S, Choi SH, et al.: Generation and analysis of expressed sequence tags (ESTs) of Camelina sativa to mine drought stress-responsive genes. Biochem Biophys Res Commun. 2015; 467(1): 83–93. PubMed Abstract | Publisher Full Text\n\nChen ZY, Guo XJ, Chen ZX, et al.: Genome-wide characterization of developmental stage- and tissue-specific transcription factors in wheat. BMC Genomics. 2015; 16(1): 125. PubMed Abstract | Publisher Full Text | Free Full Text\n\nErgen NZ, Budak H: Sequencing over 13 000 expressed sequence tags from six subtractive cDNA libraries of wild and modern wheats following slow drought stress. Plant Cell Environ. 2009; 32(3): 220–36. 
PubMed Abstract | Publisher Full Text\n\nSiddappa N, Raghu GK, Devaraj VR: Identification of Drought-Responsive Transcripts in Kodo Millet (Paspalum scrobiculatum L.). Int J Innov Res Dev. 2016; 5(11). Reference Source\n\nShivhare R, Lata C: Exploration of Genetic and Genomic Resources for Abiotic and Biotic Stress Tolerance in Pearl Millet. Front Plant Sci. 2017; 7: 2069. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChoudhary M, Jayanand, Padaria JC: Transcriptional profiling in pearl millet (Pennisetum glaucum (L.) R. Br.) for identification of differentially expressed drought responsive genes. Physiol Mol Biol Plants. 2015; 21(2): 187–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim YH, Jeong JC, Lee HS, et al.: Comparative characterization of sweetpotato antioxidant genes from expressed sequence tags of dehydration-treated fibrous roots under different abiotic stress conditions. Mol Biol Rep. 2013; 40(4): 2887–96. PubMed Abstract | Publisher Full Text\n\nShamloo-Dashtpagerdi R, Razi H, Ebrahimie E: Mining expressed sequence tags of rapeseed (Brassica napus L.) to predict the drought responsive regulatory network. Physiol Mol Biol Plants. 2015; 21(3): 329–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPruthvi V, Rama N, Govind G, et al.: Expression analysis of drought stress specific genes in Peanut (Arachis hypogaea, L.). Physiol Mol Biol Plants. 2013; 19(2): 277–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu M, Shi J, Lu C: Identification of stress-responsive genes in Ammopiptanthus mongolicus using ESTs generated from cold- and drought-stressed seedlings. BMC Plant Biol. 2013; 13(1): 88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKurowska M, Daszkowska-Golec A, Gruszka D, et al.: TILLING: a shortcut in functional genomics. J Appl Genet. 2011; 52(4): 371–90. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoens CB, Donn TM, Wolf-Saxon ER, et al.: Reverse genetics in zebrafish by TILLING. Brief Funct Genomic Proteomic. 2008; 7(6): 454–9.\n\nDwivedi SL, Scheben A, Edwards D, et al.: Assessing and Exploiting Functional Diversity in Germplasm Pools to Enhance Abiotic Stress Adaptation and Yield in Cereals and Food Legumes. Front Plant Sci. 2017; 8: 1461.\n\nComai L, Henikoff S: TILLING: practical single-nucleotide mutation discovery. Plant J. 2006; 45(4): 684–94.\n\nAkpinar BA, Lucas SJ, Budak H: Genomics approaches for crop improvement against abiotic stress. ScientificWorldJournal. 2013; 2013: 361921.\n\nYu S, Liao F, Wang F, et al.: Identification of rice transcription factors associated with drought tolerance using the Ecotilling method. PLoS One. 2012; 7(2): e30765.\n\nQi X, Liu C, Song L, et al.: PaCYP78A9, a Cytochrome P450, Regulates Fruit Size in Sweet Cherry (Prunus avium L.). Front Plant Sci. 2017; 8: 2076.\n\nRomay G, Bragard C: Antiviral Defenses in Plants through Genome Editing. Front Microbiol. 2017; 8: 47.\n\nZhao Y, Zhang C, Liu W, et al.: An alternative strategy for targeted gene replacement in plants using a dual-sgRNA/Cas9 design. Sci Rep. 2016; 6: 23890.\n\nShi J, Gao H, Wang H, et al.: ARGOS8 variants generated by CRISPR-Cas9 improve maize grain yield under field drought stress conditions. Plant Biotechnol J. 2017; 15(2): 207–16.
Sun Y, Zhang X, Wu C, et al.: Engineering Herbicide-Resistant Rice Plants through CRISPR/Cas9-Mediated Homologous Recombination of Acetolactate Synthase. Mol Plant. 2016; 9(4): 628–631.\n\nCardi T, D’Agostino N, Tripodi P: Genetic Transformation and Genomic Resources for Next-Generation Precise Genome Engineering in Vegetable Crops. Front Plant Sci. 2017; 8: 241.\n\nArora L, Narula A: Gene Editing and Crop Improvement Using CRISPR-Cas9 System. Front Plant Sci. 2017; 8: 1932.\n\nZhang K, Raboanatahiry N, Zhu B, et al.: Progress in Genome Editing Technology and Its Application in Plants. Front Plant Sci. 2017; 8: 177.\n\nCao HX, Wang W, Le HT, et al.: The Power of CRISPR-Cas9-Induced Genome Editing to Speed Up Plant Breeding. Int J Genomics. 2016; 2016: 5078796.\n\nNoman A, Aqeel M, He S: CRISPR-Cas9: Tool for Qualitative and Quantitative Plant Genome Editing. Front Plant Sci. 2016; 7: 1740.\n\nLi P, Li YJ, Zhang FJ, et al.: The Arabidopsis UDP-glycosyltransferases UGT79B2 and UGT79B3, contribute to cold, salt and drought stress tolerance via modulating anthocyanin accumulation. Plant J. 2017; 89(1): 85–103.\n\nChilcoat D, Liu ZB, Sander J: Use of CRISPR/Cas9 for Crop Improvement in Maize and Soybean. Prog Mol Biol Transl Sci. 2017; 149: 27–46.\n\nLou D, Wang H, Liang G, et al.: OsSAPK2 Confers Abscisic Acid Sensitivity and Tolerance to Drought Stress in Rice. Front Plant Sci. 2017; 8: 993.
Wang L, Chen L, Zhao R, et al.: Reduced Drought Tolerance by CRISPR/Cas9-Mediated SlMAPK3 Mutagenesis in Tomato Plants. J Agric Food Chem. 2017; 65(39): 8674–8682.\n\nXu C, Fu X, Liu R, et al.: PtoMYB170 positively regulates lignin deposition during wood formation in poplar and confers drought tolerance in transgenic Arabidopsis. Tree Physiol. 2017; 37(12): 1713–26.\n\nKumar J, Gunapati S, Kumar J, et al.: Virus-induced gene silencing using a modified betasatellite: a potential candidate for functional genomics of crops. Arch Virol. 2014; 159(8): 2109–13.\n\nKushwaha NK, Chakraborty S: Chilli leaf curl virus-based vector for phloem-specific silencing of endogenous genes and overexpression of foreign genes. Appl Microbiol Biotechnol. 2017; 101(5): 2121–9.\n\nIdo Y, Nakahara KS, Uyeda I: White clover mosaic virus-induced gene silencing in pea. J Gen Plant Pathol. 2012; 78(2): 127–32.\n\nLiou MR, Huang YW, Hu CC, et al.: A dual gene-silencing vector system for monocot and dicot plants. Plant Biotechnol J. 2014; 12(3): 330–43.\n\nMei Y, Zhang C, Kernodle BM, et al.: A Foxtail mosaic virus Vector for Virus-Induced Gene Silencing in Maize. Plant Physiol. 2016; 171(2): 760–72.\n\nYamagishi M, Masuta C, Suzuki M, et al.: Peanut stunt virus-induced gene silencing in white lupin (Lupinus albus). Plant Biotechnol. 2015; 32(3): 181–91.\n\nYang N, Wang R, Zhao Y: Revolutionize Genetic Studies and Crop Improvement with High-Throughput and Genome-Scale CRISPR/Cas9 Gene Editing Technology. Mol Plant. 2017; 10(9): 1141–1143.
Hussain B, Lucas SJ, Budak H: CRISPR/Cas9 in plants: at play in the genome and at work for crop improvement. Brief Funct Genomics. 2018.\n\nGao C: The future of CRISPR technologies in agriculture. Nat Rev Mol Cell Biol. 2018; 19(5): 275–276."
}
|
[
{
"id": "38198",
"date": "31 Oct 2018",
"name": "Elangovan Mani",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle: Ok, it represents the article well. Abstract: Ok, it provides the central idea of the article precisely; however, the authors could expand it a little more. Keywords: The authors should arrange them alphabetically. Introduction: At the end of the first paragraph, the authors point to the available transcriptomic data; I believe they could add references to available transcriptomes related to the topic there. Overall, it has been written fine. VIGS: The authors provided all the required information related to this technology. However, there are some bioinformatics tools that help in selecting the target fragment within the gene; they could add these too. Expressed sequence tags (ESTs): Ok. TILLING: Ok. CRISPR Technology: Ok. Conclusion: Ok, nicely written.\nTo summarize, the review by Singh et al. presents our current knowledge of many genes deciphered by reverse genetic technologies: VIGS, EST, TILLING and CRISPR. The tables present an ample amount of information in a well-organised way. Overall, the review provides a useful compilation of subject matter related to the addressed topic in a coherent way.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
},
{
"id": "40635",
"date": "10 Dec 2018",
"name": "Pardeep Kumar Bhardwaj",
"expertise": [
"Reviewer Expertise Plant Molecular Biology"
],
"suggestion": "Approved",
"report": "Approved\n\nThe review entitled “Milestones achieved in response to drought stress through reverse genetic approaches” by Singh et al. presents recent advances in understanding the response to drought stress in crop plants using reverse genetic technologies. It is well known that drought stress is a major concern in the era of climate change. Therefore, it is important to communicate the recent updates on the drought stress response in crop plants to the scientific community. The review article is written and organized very well, but a few minor points need to be taken care of.\nThe authors have explained several techniques available to study the functionality of different genes involved in the response to drought stress, but should also include the advantages of these techniques in monocots/dicots.\n\nIn VIGS, the authors should explain the functional analysis of DREB transcription factors using VIGS technology, citing some of the latest references.\n\nIn the ESTs analysis, the authors should include the analysis of drought-responsive ESTs generated through chemical priming studied in crop plants.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1311
|
https://f1000research.com/articles/6-1166/v1
|
21 Jul 17
|
{
"type": "Research Article",
"title": "Proteolytic processing of the L-type Ca2+ channel alpha11.2 subunit in neurons",
"authors": [
"Olivia R. Buonarati",
"Peter B. Henderson",
"Geoffrey G. Murphy",
"Mary C. Horne",
"Johannes W. Hell"
],
"abstract": "Background: The L-type Ca2+ channel Cav1.2 is a prominent regulator of neuronal excitability, synaptic plasticity, and gene expression. The central element of Cav1.2 is the pore-forming α11.2 subunit. It exists in two major size forms, whose molecular masses have proven difficult to precisely determine. Recent work suggests that α11.2 is proteolytically cleaved between the second and third of its four pore-forming domains (Michailidis et al., 2014). Methods: To better determine the apparent molecular masses (MR) of the α11.2 size forms, extensive systematic immunoblotting of brain tissue as well as full length and C-terminally truncated α11.2 expressed in HEK293 cells was conducted using six different region-specific antibodies against α11.2. Results: The full length form of α11.2 migrated, as expected, with an apparent MR of ~250 kDa. A shorter form of comparable prevalence with an apparent MR of ~210 kDa could only be detected in immunoblots probed with antibodies recognizing α11.2 at an epitope 400 or more residues upstream of the C-terminus. Conclusions: The two main size forms of α11.2 are the full length form and a shorter form, which lacks ~350 distal C-terminal residues. Midchannel cleavage as suggested by Michailidis et al. (2014) is at best minimal in brain tissue.",
"keywords": [
"Cav1.2",
"calpain cleavage",
"neuronal calcium"
],
"content": "Introduction\n\nL-type Ca2+ channels are critical regulators of neuronal excitability (Berkefeld et al., 2006; Marrion & Tavalin, 1998), gene expression (Dolmetsch et al., 2001; Graef et al., 1999; Li et al., 2012; Ma et al., 2014; Marshall et al., 2011; Murphy et al., 2014; Wheeler et al., 2012), long-term potentiation (LTP) (Boric et al., 2008; Grover & Teyler, 1990; Moosmang et al., 2005; Patriarchi et al., 2016; Qian et al., 2017), long-term depression (LTD) (Bernard et al., 2014; Bolshakov & Siegelbaum, 1994), and memory consolidation (White et al., 2008). Cav1.2 is the most abundant L-type channel in the brain and heart (Hell et al., 1993a; Sinnegger-Brauns et al., 2004). The multitude of Cav1.2-dependent functions is illustrated by diseases such as Timothy Syndrome, which arises from one of three single missense mutations in exon 8/8A of the CACNA1C gene encoding the central, ion-conducting α11.2 subunit. Symptoms of this rare autosomal dominant disorder manifest as syndactyly, autistic-like behaviors, and widespread organ dysfunctions including dysregulation of cardiac contractility and heart rate (Splawski et al., 2004).\n\nThe central subunit of Cav1.2 that forms the ion-conducting pore, α11.2, exists in two major size forms with molecular masses estimated to be between 230-250 and 190-210 kDa (Bunemann et al., 1999; Davare et al., 1999; Davare & Hell, 2003; Davare et al., 2000; De Jongh et al., 1996; Hell et al., 1996; Hell et al., 1993a; Hell et al., 1995; Hell et al., 1993b; Kochlamazashvili et al., 2010; Patriarchi et al., 2016; Qian et al., 2017). CACNA1C was first cloned from rabbit heart, where full length α11.2 consists of 2171 residues with a predicted MR of 243 kDa (Mikami et al., 1989). Differential splicing of exons encoding the N-terminus of α11.2 and a number of other CACNA1C exons can result in isoforms that vary by 30 or more residues in length (Liao et al., 2015; Snutch et al., 1991, and references therein). 
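The predicted MR quoted above can be sanity-checked with a quick back-of-envelope calculation; this sketch is illustrative and not from the paper, and the ~112 Da mean residue mass is an assumed typical value for proteins (not a figure given in the text):

```python
# Illustrative sketch (not from the paper): recover the predicted molecular
# mass of full length rabbit alpha1-1.2 from its residue count alone.
# The ~112 Da average residue mass is an assumed typical value.
residues = 2171            # full length alpha1-1.2 (Mikami et al., 1989)
avg_residue_da = 112.0     # assumed mean residue mass in daltons
predicted_kda = residues * avg_residue_da / 1000.0
print(round(predicted_kda))  # ~243, matching the predicted MR of 243 kDa
```

The agreement with the 243 kDa value quoted from Mikami et al. (1989) shows the prediction follows directly from the residue count, independent of the anomalous SDS-PAGE migration discussed below.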
Determination of the precise sizes of these α11.2 variants by SDS-PAGE is hampered by the fact that even a small increase in the concentration of acrylamide from 5 to 6 percent causes a strong change in migration of the two size forms (Hell et al., 1993b). These observations indicate that the migration behavior of α11.2 during SDS-PAGE can be anomalous.\n\nSeveral studies over the past two decades detail the regulatory importance of calpain-mediated proteolysis at the α11.2 distal C-terminus (DCT) (Fuller et al., 2010; Hell et al., 1996; Hell et al., 1993b; Hulme et al., 2006b). For instance, deletion of 300-470 residues from the C terminus resulted in a 4-6 fold increase in current density without an increase in gating currents when expressed in Xenopus oocytes (Wei et al., 1994). These findings suggest that the potentiation due to C-terminal deletions is not caused by increased surface expression of Cav1.2, but by an increase in coupling of depolarization-induced movement of the voltage sensors to pore opening (Wei et al., 1994). Similarly, truncating α11.2 after residues 1733, 1821, 1905, and 2024 increased current density in HEK293-derived tsA201 cells by several fold, which was reversed by co-expression or injection of distal fragments as separate polypeptides (Fuller et al., 2010; Gao et al., 2001; Hulme et al., 2006b). Further deletions at or before residue 1623 abrogated channel currents, consistent with earlier work identifying residues 1623-1666 as critical for Cav1.2 surface expression (Gao et al., 2000). These latter findings are also in agreement with recent observations, in which binding of α-actinin to this region is important for Cav1.2 surface expression (Hall et al., 2013; Tseng et al., 2017).\n\nEarlier evidence indicates that the 190-210 kDa short form results from proteolytic processing of the long form by the Ca2+-stimulated protease calpain (Hell et al., 1996). 
More recent work has suggested that extensive proteolytic processing occurs via calpain- and ubiquitin/proteasome-mediated mechanisms that target the intracellular loop between domains II and III, yielding two prominent α11.2 fragments: a 90 kDa fragment that might consist of the N-terminus and the first two integral membrane domains I and II, and a 150 kDa fragment that might consist of domains III and IV and the long C-terminus (Michailidis et al., 2014).\n\nWe performed a long overdue, systematic analysis of α11.2 size forms using region-specific antibodies, increasing concentrations of acrylamide, and surface biotinylation to examine their migration behavior during SDS-PAGE. As expected, one of the two main size forms of α11.2 migrates according to an apparent MR of 250 kDa, corresponding very well with the predicted size of the full length subunit. Importantly, our study also provides very consistent and clear evidence that extensive proteolytic processing of α11.2 occurs within the last ~660 C-terminal residues, with minimal cleavage in the middle of the pore-forming portion of the channel. Although removal of the DCT would be expected to increase channel currents (Fuller et al., 2010; Wei et al., 1994), the severed DCT remains associated with the main channel portion to maintain a reduction of channel activity (Fuller et al., 2010; Gao et al., 2001; Hulme et al., 2006b).\n\n\nMaterials and methods\n\nWe used 6–12 week old 50% C57BL/6N and 50% 129Sv hybrid mice (Jackson Laboratories, Bar Harbor, ME), α11.2 conditional knockout (cKO) mice and their litter-matched WT controls as described (Patriarchi et al., 2016; White et al., 2008), and 8–12 week old Sprague Dawley rats (Harlan). CaV1.2 cKO mice with a neuron-specific deletion and their wild-type littermates were on a C57BL/6NTac:129SvEv F2 genetic background.
Mice with a floxed CaV1.2 exon 2 allele (CaV1.2 f/+ or CaV1.2 f/f), maintained on a 129SvEv genetic background, were first bred to transgenic mice expressing the Cre recombinase regulated by the synapsin 1 promoter (Syn1-CreCre/+) and maintained on a C57BL/6NTac background (Cui et al., 2008; Zhu et al., 2001), producing an F1 cross. Using non-littermate offspring from the F1 cross, heterozygous floxed, Cre-positive (CaV1.2 f/+; Syn1-CreCre/+) mice were then crossed with heterozygous floxed, Cre-negative (CaV1.2 f/+; Syn1-Cre+/+) mice to produce homozygous floxed, Cre-positive (CaV1.2 f/f; Syn1-CreCre/+) conditional knockout mice as well as wild-type mice (CaV1.2 +/+; Syn1-Cre+/+). All animals were housed by the Animal Care Unit in Tupper Hall at UC Davis. This facility is fully approved for NIH-funded research and accredited by the Association for Assessment and Accreditation of Laboratory Animal Care. It maintains animals in a highly controlled environment optimized for the comfort of rodents in accordance with the applicable portions of the Animal Welfare Act and the DHHS “Guide for the Care and Use of Laboratory Animals.” Its NIH Office of Laboratory Animal Welfare Assurance Number is A3433-01. All efforts were made to ameliorate any potential suffering of animals. Specifically, animals were anesthetized with 5% isoflurane for 2–3 minutes in a two-chamber drop jar before decapitation and collection of tissue. This procedure followed NIH guidelines and was approved by the Institutional Animal Care and Use Committees at the University of California at Davis.\n\nResidue numbers correspond to the initial α11.2 sequence from rabbit heart (GenBank accession number: CAA33546).\n\nThe polyclonal antibody CNC1 was produced against the synthetic peptide (KY)TTKINMDDLQPSENEDKS, covering residues 818 to 835 within the intracellular loop between domains II and III of α11.2 (Dubel et al., 1992). The peptide was coupled to bovine serum albumin in the laboratory of W. A.
Catterall (University of Washington, WA, USA) and used to immunize rabbits (Hell et al., 1993b). Before use, the antibody was affinity purified on the same peptide cross-linked to Sepharose 4B-CL (for validation and characterization of CNC1 see Davare et al., 1999; Hall et al., 2013; Hell et al., 1993a; Hell et al., 1993b). The lysine and tyrosine residues at the N-terminus had been added for cross-linking and labeling purposes.\n\nThe polyclonal antibody ACC-003 was obtained from the company Alomone Labs (catalog number ACC-003, batch number ACC003AN4725; Jerusalem, Israel). It was produced in rabbit against the synthetic peptide (C)TTKINMDDLQPSENEDKS, which, like CNC1, covers residues 818 to 835 within the intracellular loop between domains II and III of α11.2. The cysteine at the N-terminus is not part of the original α11.2 sequence but had presumably been added for cross-linking purposes. The batch of this antibody we received was characterized in Figure 2.\n\nThe polyclonal antibody FP1 was produced against an N-terminal GST fusion protein covering residues 783 to 845 within the same intracellular loop between domains II and III of α11.2 as CNC1. The affinity purified GST fusion protein was used to immunize rabbits in the laboratory of J. W. Hell (University of Wisconsin, WI, USA). Before use, the antibody was affinity purified on the same GST fusion protein cross-linked to glutathione Sepharose (for validation and characterization see Davare et al., 2001; Davare et al., 2000; Hall et al., 2013; Hall et al., 2007; Hall et al., 2006).\n\nThe polyclonal antibody CNC2 was produced against the synthetic peptide (KY)GRGQSEEALPDSRSYVS covering residues 2122-2138 of α11.2, a region ~40 residues upstream of the very C terminus of α11.2 (Hell et al., 1993b). The peptide was coupled to bovine serum albumin in the laboratory of W. A. Catterall (University of Washington, WA, USA) and used to immunize rabbits (Hell et al., 1993b).
Before use, the antibody was affinity purified on the same peptide cross-linked to Sepharose 4B-CL (for validation and characterization see Davare et al., 1999; Hall et al., 2013; Hell et al., 1996; Hell et al., 1993b; Hulme et al., 2006a). The lysine and tyrosine residues at the N-terminus had been added for cross-linking and labeling purposes.\n\nThe phosphospecific polyclonal antibody against pS1700 was produced against the synthetic peptide EIRRAIpSGDLTAEEEL (residues 1694-1713) (Fuller et al., 2010). The peptide was coupled to bovine serum albumin in the laboratory of W. A. Catterall (University of Washington, WA, USA) and used to immunize rabbits. Before use, the antibody was affinity purified on the same peptide cross-linked to Sepharose 4B-CL (for validation and characterization see Fuller et al., 2010; Murphy et al., 2014).\n\nThe phosphospecific polyclonal antibody against pS1928 was produced against the synthetic peptide LGRRApSFHLECLK (residues 1923-1932) (Davare et al., 1999). The peptide was coupled to bovine serum albumin in the laboratory of W. A. Catterall (University of Washington, WA, USA) and used to immunize rabbits. Before use, the antibody was affinity purified on the same peptide cross-linked to Sepharose 4B-CL (for validation and characterization see Davare & Hell, 2003; Davare et al., 2000; Hall et al., 2007; Hall et al., 2006).\n\nAll procedures were performed on ice. Instruments, including centrifuge rotors, tubes, tools, and buffers, were pre-cooled at 4°C or on ice to minimize post-mortem proteolysis (Hell et al., 1993a; Hell et al., 1993b; Westenbroek et al., 1992). 
Whole mouse brains and acute rat forebrain and cortical slices were extracted with 1% Triton X-100 in 150 mM NaCl, 10 mM EDTA, 10 mM EGTA, 10 mM Tris, pH 7.4 containing protease inhibitors (0.1 mM phenylmethylsulfonyl fluoride, 1 µM pepstatin A, 2 µM leupeptin, 4 µM aprotinin) and phosphatase inhibitors (2 µM microcystin LR, 1 mM p-nitrophenyl phosphate, 1 mM sodium pyrophosphate, 2.5 mM sodium fluoride). Extracts were cleared by 30 minutes of centrifugation (250,000 × g). The soluble fraction was incubated on a head-over-head tilter with protein A-Sepharose beads and 2 µg FP1 antibody for 4 h at 4°C and washed three times with 0.1% Triton X-100 in 150 mM NaCl, 10 mM EDTA, 10 mM EGTA, 10 mM Tris, pH 7.4. Immunoprecipitated Cav1.2 underwent SDS-PAGE in gels with a stacking phase polymerized from 3.5% acrylamide and a separating phase polymerized from 5, 7, 9, 11, or 13% acrylamide. Protein was transferred to polyvinylidene fluoride (PVDF) membranes at 50 V for 600 minutes for subsequent probing as previously described (Davare et al., 1999; Hell et al., 1993a; Hell et al., 1993b). Briefly, membranes were blocked in 10% milk, incubated in affinity-purified primary antibody (FP1 1:800, CNC1 1:200, CNC2 1:50, anti-pS1700 1:400, anti-pS1928 1:100, ACC-003 1:400) for 2 hours, washed, incubated in horseradish peroxidase (HRP)-labeled Protein A for 1 hour, washed, and developed on autoradiography film using chemiluminescence.\n\nForebrain slices were prepared from rat brain, then non-cortical regions trimmed when indicated to obtain cortical slices, equilibrated in oxygenated (95% O2, 5% CO2) artificial cerebral spinal fluid (ACSF: 119 mM NaCl, 26 mM NaHCO3, 1.25 mM NaH2PO4, 2.5 mM KCl, 1 mM MgSO4, 2.2 mM CaCl2, 15 mM glucose, 1 mM myo-inositol, 2 mM Na-pyruvate, 0.4 mM ascorbic acid) at 32°C for 1 h, and labeled at 4°C for 45 min in 2 ml ACSF containing 1 mg/ml Sulfo-NHS-SS-biotin (Pierce).
Oxygenation of all slices was maintained throughout the entirety of the experiment for slice equilibration, biotinylation, quenching and lysis procedures. Excess Sulfo-NHS-SS-biotin was quenched by washing slices four times with ice-cold ACSF buffer containing 100 mM glycine. Cells were homogenized on ice with 50 mM Tris-Cl pH 7.4, 150 mM NaCl, 10 mM EGTA, 10 mM EDTA, 1% NP-40, 10% glycerol, 0.05% SDS, 0.4% DOC containing protease and phosphatase inhibitors, and insoluble material was removed by centrifugation (10,000 × g, 20 min). Biotinylated constituents in lysates, each containing 300 μg of protein, were affinity-purified by incubation with 30 µl of NeutrAvidin-conjugated Sepharose beads (Thermo-Fisher) for 3 h at 4°C. Following four ice-cold washes of bead-bound material with 1% Triton X-100, 150 mM NaCl, 10 mM Tris-Cl, 10 mM EDTA, 10 mM EGTA, immobilized proteins were eluted by treatment with SDS sample buffer, separated by SDS-PAGE (8% resolving gel), and transferred to PVDF before immunoblotting as above.\n\n\nResults\n\nTo identify the main size variants of brain α11.2, we performed immunoblotting with three different antibodies made against the loop between domains II and III as well as three different antibodies raised against various parts of the C-terminus of α11.2 (Davare et al., 2000; Hell et al., 1993a; Hell et al., 1993b) (Figure 1). FP1, CNC1, and the commercial antibody ACC-003 were raised against peptides covering middle portions of the II/III loop of α11.2. The anti-phospho-S1700 antibody (pS1700) was produced against the respective phosphopeptide covering residues 1694-1713 in the C-terminus, the anti-phospho-S1928 antibody (pS1928) against the respective phosphopeptide covering residues 1923-1932, and CNC2 against residues 2122-2138 near the very C-terminus of α11.2 (Figure 1).\n\nShown is a schematic of the Cav1.2 α11.2 subunit, in which regions used as immunogens for the depicted antibodies are identified by arrows.
Exact residues are listed in the table and numbered according to α11.2 given in GenBank accession number CAA33546. FP1, CNC1, and ACC-003 are directed against the loop between domains II and III, pS1700 against phosphorylated S1700, pS1928 against phosphorylated S1928, and CNC2 against residues 2122-2138 of α11.2, which are ~40 residues upstream of the very C terminus of α11.2.\n\nWe tested whether immunoreactive bands recognized by these antibodies correspond to α11.2 size forms using brain extracts from WT and α11.2 KO mice. Total KO of α11.2 is embryonically lethal due to the central role of Cav1.2 in triggering the heartbeat (Seisenberger et al., 2000). Thus, we used tissue from conditional α11.2 KO mice (cKO) in which the floxed α11.2 gene was excised by Cre recombinase, whose expression was driven by the synapsin I promoter, resulting in a pan-neuronal deletion throughout the brain (Cui et al., 2008; Zhu et al., 2001). We extracted whole mouse brain with 1% Triton X-100 (solubilizing >90% of total Cav1.2) and used the extracts directly for immunoblotting (Figure 2A). FP1 detected clear, strong bands of apparent MR of ~150, 210, and 250 kDa in WT mice. As expected for antibodies with immunoreactivity to α11.2, these 210 and 250 kDa bands were not readily detectable when cKO brain tissue was probed with FP1. Accordingly, these bands constitute bona fide α11.2 size forms. In contrast, the 150 kDa band was not only prominent in WT samples but also highly expressed in cKO brain, suggesting that this band does not correspond to α11.2 sequences. This conclusion is further supported when similar blots were probed with CNC1, which only recognized bands of 210 and 250 kDa in WT brain, both of which were undetectable in immunoblot lanes containing lysate from cKO mice.
The ACC-003 antibody, a commercial antibody designed against the same epitope, recognized similar 210 and 250 kDa bands present in WT but not cKO brains, which is again consistent with these bands representing true major α11.2 size forms. However, this antibody detected additional immunoreactive bands of ~130 and ~190 kDa that were of equal strength in brain lysates from both WT and cKO mice, indicating that these two bands are not true isoforms of α11.2.\n\n(A) Immunoblots of Triton X-100 extracts from conditional α11.2 KO mice (KO) and litter-matched WT mice using gels polymerized from 8% acrylamide. To ensure that there was no spill-over between lanes, in some gels one or more lanes were left empty as shown here for the middle lane labeled E in the right FP1 blot. To fully resolve α11.2 short and long forms, the 100 kDa marker was run close to the bottom except in the right panel. In this experiment, electrophoresis of the same extracts used for α11.2 immunoblotting was terminated before the dye front reached the bottom. Probing for β-actin showed that comparable amounts of protein were present in each extract from the different WT and cKO mice. (B) Cav1.2 was immunoprecipitated from brain extracts from conditional KO and WT mice with the FP1 antibody before SDS-PAGE in gels polymerized from 6% acrylamide and immunoblotting with the indicated antibodies. To fully separate α11.2 short and long forms, electrophoresis was performed until the 100 kDa marker was near the bottoms of the gels. For all antibodies, the ~210 and 250 kDa bands were nearly or completely absent in cKO samples.\n\nFor increased sensitivity and to further define the identity of the 150 kDa band detected in FP1 blots and the 130 and 190 kDa bands recognized by ACC-003, we performed immunoprecipitation to concentrate the α11.2 isoforms from a much larger volume of lysate.
The FP1 antibody (of which we have a significantly larger supply than of the other antibodies) was used to immunoprecipitate α11.2 from Triton X-100 brain extracts. The resulting concentrate was then subjected to individual immunoblot analysis using the six distinct α11.2 antibodies available. Remarkably, probing with FP1 only revealed a 210 and a 250 kDa band but not the 150 kDa band (Figure 2B). Apparently, this 150 kDa band detected by FP1 immunoblot of directly loaded brain extracts is not readily immunoprecipitated by FP1. This observation further suggests that the 210 and 250 kDa bands are immunologically different from the 150 kDa band, with the 210 and 250 kDa proteins but not the 150 kDa protein being efficiently immunoprecipitated. Moreover, as with FP1, the CNC1, ACC-003, and pS1700 antibodies all recognized bands of 210 and 250 kDa in FP1 WT brain immunoprecipitates, whereas the more C-terminally directed pS1928 and CNC2 antibodies recognized only a single band of 250 kDa (Figure 2B). FP1, CNC1, ACC-003, and pS1700 immunoblotting did, as expected, reveal faintly reactive 210 and 250 kDa bands after FP1 immunoprecipitation from cKO brains. These weakly immunoreactive bands are the result of the continued α11.2 expression in non-neuronal tissue (glia and vasculature). Importantly, the 130 and 190 kDa bands recognized by ACC-003 in brain lysate of WT and cKO mice were not detectable after the FP1 immunoprecipitation. Similar to our observation that the 150 kDa band detected by FP1 probing of directly loaded brain lysates is not detected in blots of FP1 immunoprecipitates, this finding further indicates that the 130 and 190 kDa bands are not α11.2 isoforms.\n\nNot all proteins, including MR markers, consistently migrate at the same apparent molecular mass during SDS-PAGE. It is conceivable that a protein of a true MR of 150 kDa could run with an apparent MR of 200 kDa or more.
To increase certainty about the MR of the apparent 210 and 250 kDa bands detected in the above experiments and scrutinize whether the apparent 210 kDa band might under different conditions migrate near a 150 kDa marker, α11.2 migration relative to two different MR marker sets was analyzed in gels made from different concentrations of acrylamide (5–13%). For this analysis, Cav1.2 was enriched by immunoprecipitation with FP1. The individual marker proteins in the two different MR marker sets migrated uniformly and as expected for their molecular mass. Here, all five of the tested α11.2 antibodies recognized a protein band that migrated with the 250 kDa size markers in 5% gels and slightly slower than the 250 kDa markers in all other % acrylamide gels (Figure 3). The two loop antibodies FP1 and CNC1, as well as pS1700, but neither pS1928 nor CNC2, recognized a second band that migrated either between the 150 and 250 kDa markers in 5% acrylamide gels or just below the 250 kDa markers in 7% gels, or co-migrated with the larger size form in 9, 11, and 13% gels. The pS1928 and CNC2 antibodies only detected the long form in brain extracts while the pS1700 antibody recognized both size forms, a pattern indicating that the shorter form represents an α11.2 size variant that is truncated, relative to full length, between residues 1700 and 1928. This notion is consistent with a size difference between the long and short forms of roughly 30–60 kDa and is also in agreement with the observed migration for the lower FP1-, CNC1-, and pS1700-immunoreactive band in 5–7% gels (the phospho-serine 1700 being 471 residues upstream of the distal C-terminus of full length α11.2).\n\nCav1.2 was immunoprecipitated from mouse brain extracts (Triton X-100) with the FP1 antibody against α11.2 before fractionation by SDS-PAGE in gels polymerized from 5, 7, 9, 11, and 13% acrylamide followed by immunoblotting with the indicated antibodies.
Two different prestained marker protein sets were used to estimate MR.\n\nIn some cases, a faint immunoreactive band with an apparent MR of ~130 kDa in 5% gels and ~150 kDa in 7%, 11% and 13% gels was observed by immunoblotting with CNC1 and FP1. Figure 3 shows the clearest examples among all our immunoblots for detection of this weak band by CNC1 and FP1. However, in the majority of experiments a similarly sized band was not detectable.\n\nGiven the anomalous migration of the short form, we wanted to provide further evidence for the estimation of a 30-60 kDa difference between the two size forms. α11.2 was expressed in HEK293 cells either as its full length form or as a shortened version truncated at residue 1800 (α11.2Δ1800) before extraction, immunoprecipitation and separation by 7% SDS-PAGE. As with the mouse brain lysate samples, FP1 and pS1700 detected full length and truncated α11.2 with an apparent MR of about 250 and 210 kDa, respectively, whereas the pS1928 antibody only identified the full length α11.2 (Figure 4A).\n\nHEK293T cells were transfected with full length or truncated (Δ1800) α11.2 plus α2δ1 and β2a. HEK293T cells and rat and mouse brain slices were extracted with 1% Triton X-100 before immunoprecipitation of α11.2, SDS-PAGE in gels polymerized from 8% acrylamide, and immunoblotting with the indicated antibodies. (A) The full length form of α11.2 expressed in HEK293 cells migrated with an apparent MR of 250 kDa and is detected by FP1, pS1700 and pS1928. Truncated Δ1800 α11.2 migrated with an apparent MR of 210 kDa and is detected by FP1 and pS1700 but not pS1928. (B) The α11.2 short and long form appear only partially resolved because the weak α11.2 signals in HEK293 cell samples required long exposure times.
The upper band as detected by CNC1 after FP1 immunoprecipitation from rat and mouse forebrain slices and cortical slices co-migrated with the full length form of α11.2 expressed in HEK293 cells, while the lower band co-migrated with the truncated Δ1800 α11.2 expressed in HEK293 cells.\n\nAdditional experiments were performed with rat tissue to look for potential differences in proteolytic processing between mouse and rat α11.2. We extracted forebrain slices and cortical slices from both mouse and rat for immunoprecipitation with FP1 and separation by SDS-PAGE, matching the 8% acrylamide gel conditions used previously (Michailidis et al., 2014). As expected from our earlier analysis in 7 and 9% acrylamide gels, the α11.2 short form was partially separated from the long form in the 8% acrylamide gel (Figure 4B). Importantly, the long and short forms from the rodent brain tissues co-migrated with the corresponding full length α11.2 and α11.2Δ1800 ectopically expressed in HEK293 cells. Accordingly, truncation of the long form at approximately residue 1800 is most likely what gives rise to the main α11.2 short form in rodent brain. Moreover, these experiments did not reveal a protein band isolated from rat brain lysates that could conceivably correspond to a 150 kDa size form of α11.2, and only a very weak band of ~150 kDa could be detected in the mouse samples.\n\nTo test whether pull-down of surface biotinylated proteins might enrich for a unique α11.2 population at the cell surface and thereby unmask a size form smaller than 200 kDa, we performed surface biotinylation of acute slices made from both total rat brain and cortex before extraction. We then carried out neutravidin-Sepharose pulldown and immunoblotting as described earlier (Michailidis et al., 2014).
In agreement with our findings above, CNC1 and FP1 immunoblotting of proteins in neutravidin-Sepharose pulldowns and total lysate loads separated by 8% PAGE revealed major partially separated bands at ~200–250 kDa and no evidence of a 150 kDa band (Figure 5). On some immunoblots a weak band within the 90 kDa range was detectable by CNC1 (Figure 5B). Neutravidin pulldowns of unbiotinylated control samples did not yield detectable immunoblot signals, verifying the specificity of the biotin-neutravidin pulldown assay.\n\nCortical and forebrain slices were surface biotinylated and solubilized before pulldown with NeutrAvidin Sepharose, SDS-PAGE in 8% acrylamide gels, and immunoblotting with CNC1 and FP1. Control reflects slices mock treated without Sulfo-NHS-SS-biotin to demonstrate specificity of pulldown. Twenty μL of lysate was also directly loaded for comparison.\n\nBecause we observed in some experiments a weak ~150 kDa band in FP1 immunoprecipitates that were immunoblotted with FP1 and CNC1 (Figure 3), we wanted to clarify whether this band is related to the strong 150 kDa band detected with FP1 in brain lysate of WT and cKO mice. We ran forebrain extracts and FP1 immunoprecipitates in parallel on the same gel (Figure 6). As before (Figure 2), CNC1 did not detect a 150 kDa band in lysate lanes (Figure 6A) even when blots were exposed to film for longer time periods (Figure 6B). However, upon prolonged film exposure, CNC1-probed blots revealed a faint 150 kDa band in lanes for the FP1-immunoprecipitated samples isolated from WT mice. Extended film exposure also revealed a weak 150 kDa band detected by FP1 after immunoprecipitation with FP1 (Figure 6C). Because the faint band in FP1 immunoprecipitates is equally well detected by FP1 and CNC1 but the strong 150 kDa band seen with FP1 in lysate is only detected by FP1, the two ~150 kDa bands are most likely not related to one another but rather represent different protein species.
If these ~150 kDa bands were the same protein, the CNC1 antibody should have detected the strong 150 kDa band in lysate as well. Finally, only a faint 150 kDa band was detected by the ACC-003 antibody upon extended exposure of the blot to film (Figure 6E).\n\nImmunoblots with CNC1 (A,B), FP1 (C), and ACC-003 (D,E) of Triton X-100 extracts from WT mice (lysate) and after immunoprecipitation with FP1 from cKO and WT mice. Gels were polymerized from 8% acrylamide. Note that a weak 150 kDa band is detected by CNC1, FP1, and ACC-003 after enrichment of α11.2 by immunoprecipitation with FP1 but the strongly immunoreactive 150 kDa band detected by FP1 in lysate is not detectable by either CNC1 or ACC-003.\n\n\nDiscussion\n\nOur extensive and detailed biochemical analysis of α11.2 size forms was inspired by recent work suggesting that surface α11.2 is cleaved to a large degree between domains II and III (Michailidis et al., 2014). The main evidence for the midchannel proteolysis proposed in that publication was based on immunoblotting:\n\n1. The anti-LII-III antibody (ACC-003 from Alomone) detected two main bands that migrated with apparent MR values of ~150 and ~250 kDa (like CNC1, ACC-003 was made against α11.2 residues 818-835 in the loop between domains II and III);\n\n2. An antibody produced against residues 2127-2143 near the very C-terminus of α11.2 (anti-LCt) recognized ~9 bands of varying intensities, one of which exhibited intermediate labeling intensity at an apparent MR of ~150 kDa;\n\n3. An antibody against the N-terminus of α11.2 (anti-LNt) detected ~8 bands of varying intensities, including one of strong intensity that migrated with an apparent MR of ~90 kDa.\n\nThese observations are consistent with the possibility that cleavage could occur just N-terminal to the recognition site of ACC-003 / anti-LII-III inside the loop between domains II and III (Michailidis et al., 2014).
The 250 kDa fragment recognized by ACC-003 / anti-LII-III would reflect the full length channel, and the 150 kDa fragment recognized by ACC-003 / anti-LII-III would represent a fragment that comprises most of loop II/III, domains III and IV, and the full length C-terminus. The 90 kDa band detected with the N-terminal antibody would be the other cleavage product of the proposed midchannel cleavage, and the anti-LCt-recognized 150 kDa band would be the remaining C-terminal cleavage product. However, it remains untested and unclear whether the N- and C-terminal antibodies in this work did indeed recognize their intended target and which among the many bands detected by these antibodies were truly α11.2, and not cross-reactive proteins. Moreover, the 150 kDa band recognized by the anti-LCt antibody was a minor fraction of all the many bands detected by the anti-LCt antibody whereas the 150 kDa band recognized by the ACC-003 / anti-LII-III antibody was one of two major bands detected by that antibody, making it unlikely that those two 150 kDa bands originated from the same protein.\n\nOne potential explanation for the detection of an apparent 150 kDa form of α11.2 (Michailidis et al., 2014) is that the full length α11.2 form and the C-terminally truncated form that we identify as 210 kDa in size were well separated (as in our 5% gels) but the 150 kDa MR marker ran slower in their experiments than expected, which is possible for pre-stained markers. It is also possible that the 210 kDa form ran faster than anticipated, or that a combination of both occurred. These effects would result in an apparent MR value of our 210 kDa band that is less than the actual MR. Consistent with this possibility, the N-terminal antibody used in the previous work (Michailidis et al., 2014) recognized, in addition to the 90 kDa band, a 150 kDa band, which could be an overly fast migrating 210 kDa polypeptide.
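The sensitivity of apparent MR estimates to marker migration can be illustrated with a small calculation. The sketch below is purely illustrative (the Rf values are hypothetical, not measured in this or the cited study): apparent MR is obtained by log-linear interpolation between two bracketing prestained markers, and letting the nominal 150 kDa marker run slower than expected makes the very same band read substantially lighter.

```python
import math

def apparent_mr(rf, markers):
    """Estimate the apparent MR (kDa) of a band at relative mobility rf
    by log-linear interpolation between two calibration markers (kDa, Rf)."""
    (m1, r1), (m2, r2) = markers
    slope = (math.log10(m2) - math.log10(m1)) / (r2 - r1)
    return 10 ** (math.log10(m1) + slope * (rf - r1))

# Hypothetical band migrating midway between the 250 and 150 kDa markers
band_rf = 0.30

# Markers migrating as expected: the band reads ~194 kDa
nominal = [(250, 0.20), (150, 0.40)]
print(round(apparent_mr(band_rf, nominal)))   # -> 194

# Same band, but the prestained 150 kDa marker runs slower than nominal
# (Rf 0.32 instead of 0.40): the band now reads ~163 kDa
shifted = [(250, 0.20), (150, 0.32)]
print(round(apparent_mr(band_rf, shifted)))   # -> 163
```

The design point is simply that apparent MR is only as trustworthy as the markers: a modest shift in one prestained marker's mobility changes the read-out by tens of kDa.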
Importantly, by demonstrating precise co-migration of the short form with α11.2Δ1800 ectopically expressed in HEK293 cells, we ruled out the possibility that the short α11.2 form we identified with an apparent MR of ~210 kDa is actually a significantly smaller fragment (potentially with an MR of 150 kDa) that ran slower than would be expected for a polypeptide with an MR substantially below 210 kDa (Figure 4). Thus, the short α11.2 form we observed following isolation from rodent brains lacks ~371 C-terminal residues of full length α11.2, as is the case for α11.2Δ1800.\n\nBased on our analysis of cKO brain extracts, the most likely explanation is that the earlier 150 kDa band detected previously (Michailidis et al., 2014) was not a significant α11.2 isoform but rather a different protein recognized by the ACC-003 / anti-LII-III loop antibody. In fact, in addition to the 210 and 250 kDa bands seen only in α11.2 WT tissue and thereby reflecting major α11.2 size forms, the ACC-003 antibody we obtained from Alomone Labs did recognize an ~130 and an ~190 kDa band, which were present not only in α11.2 WT but also in cKO mice. Similarly, another recent report indicates that the ACC-003 used in that work detected a band of ~130 kDa that was equally present in α11.2 WT and cKO tissue whereas the 250 kDa band was only present in WT but not cKO tissue (Bavley et al., 2017). It is unclear whether the 150 kDa band recognized by the ACC-003 / anti-LII-III antibody in the earlier work (Michailidis et al., 2014) corresponds to the 130 kDa band we detect with the ACC-003 antibody. This explanation is quite conceivable as migration behavior of native proteins (and even MR markers) can easily vary between gel systems, as we showed for the ~210 kDa α11.2 size form in Figure 2 and discussed in the preceding paragraph.
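The residue arithmetic used throughout this discussion (~371 residues lost at Δ1800; a 30–60 kDa difference for cleavage between residues 1700 and 1928) can be checked with a back-of-the-envelope calculation. A minimal sketch, assuming a full-length α11.2 of 2171 residues (implied by the stated offsets, since 1800 + 371 = 1700 + 471 = 2171) and a rough average residue mass of ~110 Da:

```python
AVG_RESIDUE_DA = 110   # rough average mass of one amino-acid residue (assumption)
FULL_LENGTH = 2171     # implied full-length α11.2: 1800 + 371 = 1700 + 471 residues

def lost_mass_kda(cleavage_site):
    """Approximate mass (kDa) of the C-terminal fragment removed by
    truncation at the given residue number."""
    return (FULL_LENGTH - cleavage_site) * AVG_RESIDUE_DA / 1000

# Truncation at residue 1800 removes ~371 residues, i.e. ~41 kDa --
# consistent with the observed 250 -> ~210 kDa shift.
print(round(lost_mass_kda(1800)))  # -> 41

# Truncation anywhere between residues 1928 and 1700 removes ~27-52 kDa,
# matching the inferred ~30-60 kDa difference between long and short forms.
print(round(lost_mass_kda(1928)), round(lost_mass_kda(1700)))  # -> 27 52
```

This is only a mass-from-residue-count estimate; glycosylation and anomalous SDS-PAGE migration of large membrane proteins mean apparent MR values need not match it exactly.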
Alternatively, cross-reactivity of antibodies with proteins other than α11.2 could be different for the ACC-003 / anti-LII-III antibody batch used more than 2 years ago (Michailidis et al., 2014) and the ACC-003 antibody we received in 2016 from Alomone Labs. Such differences could be due to different immune system responses within the individual rabbits used for immunization at different times. This possibility would also explain why the ACC-003 antibody we obtained from Alomone Labs recognized a cross-reacting 190 kDa band when the earlier ACC-003 / anti-LII-III antibody did not (Michailidis et al., 2014).\n\nIn further support of the notion that antibodies against peptides derived from the LII-III loop of α11.2 can cross-react with other proteins, our FP1 antibody recognizes a 150 kDa band of equal strength in extracts from WT and cKO brains, whereas the 210 and 250 kDa bands are strong in WT extracts and very faint in cKO extracts, the latter reflecting α11.2 expression in non-neuronal tissue and cells (Figure 2A). FP1 was made against a polypeptide spanning residues 783-845, which includes all of the residues of the synthetic peptide used to make the ACC-003 / anti-LII-III antibody, as well as our CNC1 antibody (residues 821-838). Perhaps the 821-838 segment mimics not only the α11.2 epitope but also, to some degree though not perfectly, a related epitope on another protein that is present in WT and cKO mice. In fact, another antiserum that was produced completely independently of our CNC1 antibody but utilized the very same α11.2 peptide sequence also detected an ~150 kDa band of similar intensity in brain lysates from WT and cKO mice (Tippens et al., 2008). Of note, the cKO mice used by Tippens et al. are different from the cKO used by us, indicating that the strong 150 kDa band is present in mice of several different genetic backgrounds.
Concordant with this idea, neither the ACC-003 antibody that we received from Alomone Labs, which recognized a 130 kDa band in WT and cKO brains, nor our CNC1 antibody recognized the strong 150 kDa band seen with FP1 in brain lysate. This finding is, once more, likely due to variability in immune responses of the individual rabbits to the immunogen, which at times but not always gives rise to antibodies against this unknown 150 kDa protein.\n\nThe results of our rigorous testing and validation of the antibodies used herein (see also Hall et al., 2013) boost our confidence that the ~250 kDa protein detected by all six antibodies and the ~210 kDa protein detected by the four antibodies that recognize epitopes upstream of residue 1800 are two different size forms of α11.2. In contrast, the ~130 and ~150 kDa bands detected with ACC-003 / anti-LII-III and FP1, respectively, are most likely not related to α11.2 as these bands persist in α11.2 cKO tissue. Overall, the evidence is overwhelming that the prominent bands in the 130-150 kDa range detected by the various anti-α11.2 antibodies represent proteins that are different from α11.2.\n\nTheoretically, it is also possible that the 150 kDa protein species arose because of nonspecific post mortem proteolysis. Because we used a strong and well-defined cocktail of inhibitors against serine, cysteine, and metalloproteases (Hell et al., 1993a; Hell et al., 1993b; Westenbroek et al., 1992) (see Material and Methods) and were particularly careful to keep all samples cold, such proteolysis may not have occurred to a significant degree in our hands. Accordingly, we detected at best a very weak band migrating with an apparent MR of 150 kDa with the four different antibodies that also recognized full length α11.2. Under less stringent conditions, greater proteolysis might occur post mortem during tissue extraction, biotinylation, and purification.
We tested whether incubation of forebrain slices at room temperature without O2 supply for 10 and 20 min would trigger proteolytic processing that results in a 150 kDa α11.2 band. However, in several different experiments we did not observe any increase in the weak 150 kDa band that is detected by either FP1 or CNC1 after enrichment of Cav1.2 by immunoprecipitation with FP1 (data not shown). Thus, it appears unlikely that any 150 kDa band is due to post mortem proteolytic processing of α11.2.\n\nIf the 150 kDa band reported in the previous work (Michailidis et al., 2014) does not correspond to the 210 kDa fragment of α11.2 that arises via cleavage in the middle of the C-terminus, why then did Michailidis et al. not observe a doublet of 210 and 250 kDa in their hands with the ACC-003 / anti-LII-III antibody? Perhaps differences in SDS-PAGE procedures resulted in the 250 and 210 kDa size forms not being separated at all during their analysis, so that they appeared to migrate as one band at 250 kDa, analogous to our finding that the two size forms co-migrate as a single band in 11 and 13% gels. This is possible even in 8% gels as the electrophoresis period applied by Michailidis et al. was most likely shorter than in our hands. For the analysis of α11.2 reported here, gel electrophoresis was extended to the point that the 60 kDa marker ran off the gel. Even with this protocol we see only partial separation of the 210 and 250 kDa forms of α11.2 in our 8% gels (Figure 4, Figure 5). With shorter running times, little to no separation is expected in 8% gels.\n\nMichailidis et al. (2014) also attempted to identify midchannel cleavage by immunocytochemical image analysis of ectopically expressed α11.2 carrying a GFP tag at its cytosolic N-terminus and an HA tag in one of the extracellular loops of domain III, the latter allowing anti-HA antibody labeling of surface-expressed CaV1.2.
The existence of clusters that only show GFP fluorescence is consistent with a substantial fraction of α11.2 being intracellular, where HA labeling is absent. The existence of often very large HA-immunoreactive red clusters lacking GFP signals was interpreted as evidence for separation of GFP and HA tags by proteolysis. If so, channel halves would completely dissociate and not remain close to each other, as would be required for a channel to function with modified current conductance. Accordingly, the potential for separate N- and C-terminal portions of α11.2 to form functional channels, as characterized by Michailidis et al. (2014), would either not be relevant in intact neurons if all of the cleaved channels dissociate or only apply to a small subpopulation of α11.2; however, the degree and function of spatial separation of N- and C-terminal α11.2 fragments remains unclear.\n\nAlternatively, rather than reflecting channel cleavage, the lack of detection of GFP signals in the HA-immunoreactive red clusters might be related to image acquisition or analysis. It is possible that the ectopically expressed α11.2, together with the GFP tag signal, is much higher inside dendritic shafts than at their surfaces, resulting in a steep gradient toward the periphery. If so, when images are taken so that the GFP signal in the center of the shaft is in the dynamic range (i.e., fairly strong but not saturated), the peripheral signal would be much weaker. This appears to be the case in Figures 2B and 2D in the preceding work (Michailidis et al., 2014), where GFP seems to be mostly in the center of the dendrite and HA, as expected for surface labeling, at the periphery while sparing the center. In this manner, GFP could appear weak or absent in peripheral areas of dendrites where HA is mostly localized due to the surface labeling for HA.
Such a scenario would provide one potential explanation for surface areas showing strong HA and weak GFP signals, where actually a sizable fraction of uncleaved α11.2 corresponding to the amount of HA signal might be present, with the GFP signal at the surface appearing weak due to strong intracellular GFP signal.\n\nFigure 2C in the Michailidis study (Michailidis et al., 2014) illustrates another potential scenario for dissociation of HA and GFP signals. This figure shows a long segment of the dendritic shaft that exhibits mostly HA and little if any GFP signal. Even mild paraformaldehyde fixation can lead to permeabilization of 5–20 μm long segments of the dendritic plasma membrane and thereby expose sub-plasma membrane epitopes (Taylor & Fallon, 2006; Watschinger et al., 2008) (Matt and Hell, data not shown). Thus, it is conceivable that the strong HA staining in this figure is paired with relatively strong suppression of GFP fluorescence in that segment, as paraformaldehyde, which quenches GFP fluorescence, might have had preferential access to this region compared to the regions between the 0-2 and 12-14 μm marks where the GFP signal is much stronger. The strong HA staining in this dendritic segment could be surface labeling or intracellular HA staining of some sort of Cav1.2 clusters (perhaps reflecting a secretory compartment) due to antibody access induced by paraformaldehyde.\n\nEvidence for the notion that HA staining likely yields much larger signals than GFP, especially after fixation with paraformaldehyde, is present in Figure 2D of this previous report (Michailidis et al., 2014). Here, protrusions are more strongly labeled by anti-HA staining than by GFP, and shaft diameter appears much wider for HA than for GFP; these observations hint that the Cav1.2-GFP signal in or near the plasma membrane is rather weak and largely from intracellular Cav1.2.
In this respect, it is surprising that there would be rather long segments of dendritic shaft that contain mostly HA and little if any GFP signal (as in Figure 2C of this publication).\n\nWe provide strong and clear evidence that the primary and major neuronal size forms of the α11.2 subunit of CaV1.2 are ~210 and 250 kDa in molecular mass. Based on detection of only a weak 150 kDa band by CNC1, ACC-003, and FP1 immunoblotting after immunoprecipitation with FP1 (Figure 3 and Figure 6), it appears that a very small fraction of α11.2 can be cleaved into 150 and 90 kDa fragments, which may remain to some degree associated with each other to form L-type channels of modified biophysical properties; however, the prevalence of such proteolytic processing is certainly low (≤1%). It remains unclear what effects any limited mid-channel processing would have on overall L-type channel activity in neurons. It is possible that such processing of α11.2 is more prominent in certain cell types or subcellular regions and could in fact lead to the change in channel properties described by Michailidis et al. Determining where and under what condition(s) such changes might occur raises interesting questions for future work.\n\n\nData availability\n\nDataset 1: Raw data supporting the findings presented in this study. The raw data show full size film images of probed membranes. Full size membranes resulting from transfer of full size gels were often vertically cut to separate replicate sets of samples, typically separated by MR markers, for simultaneous probing of the different membrane fragments with different antibodies. For optimal resolution of the α11.2 long and short forms, which exhibit high MR, gels were run until the 60 kDa MR marker was either close to the very bottom of the gel or had completely run off.\n\nRaw data for Figure 2.
Determination of antibody specificity for α11.2 with conditional α11.2 KO mice.\n\nOriginal source images for Figure 2:\n\n(A) Immunoblots of Triton X-100 extracts from conditional α11.2 KO mice (KO) and litter matched WT mice using gels polymerized from 8% acrylamide. To ensure that there was no spill-over between lanes, in some gels one or more lanes were left empty, as shown here for the middle lane labeled E in the right FP1 blot. To fully resolve α11.2 short and long forms, the 100 kDa marker was run close to the bottom except in the right panel. In this experiment, electrophoresis of the same extracts used for α11.2 immunoblotting was terminated before the dye front reached the bottom. Probing for β-actin showed that comparable amounts of protein were present in each extract from the different WT and KO mice.\n\n(B) Cav1.2 was immunoprecipitated from brain extracts from conditional KO and WT mice with the FP1 antibody before SDS-PAGE in gels polymerized from 6% acrylamide and immunoblotting with the indicated antibodies. To fully separate α11.2 short and long forms, electrophoresis was performed until the 100 kDa marker was near the bottom of the gels. For all antibodies, the ~210 and 250 kDa bands were nearly or completely absent in cKO samples.\n\nRaw data for Figure 3. Analysis of α11.2 size forms by SDS-PAGE with increasing acrylamide concentrations.\n\nOriginal source images for Figure 3: Cav1.2 was immunoprecipitated from mouse brain extracts (Triton X-100) with the FP1 antibody against α11.2 before fractionation by SDS-PAGE in gels polymerized from 5, 7, 9, 11, and 13% acrylamide followed by immunoblotting with the indicated antibodies. Two different prestained marker protein sets were used to estimate MR.\n\nRaw data for Figure 4.
Mouse and rat α11.2 short forms co-migrate with α11.2 truncated at residue 1800 in the middle of the C-terminus.\n\nOriginal source images for Figure 4: HEK293T cells were transfected with full length or truncated (Δ1800) α11.2 plus α2δ1 and β2a. HEK293T cells and rat and mouse brain slices were extracted with 1% Triton X-100 before immunoprecipitation of α11.2, SDS-PAGE in gels polymerized from 8% acrylamide, and immunoblotting with the indicated antibodies.\n\n(A) The full length form of α11.2 expressed in HEK293 cells migrated with an apparent MR of 250 kDa and is detected by FP1, pS1700, and pS1928. Truncated Δ1800 α11.2 migrated with an apparent MR of 210 kDa and is detected by FP1 and pS1700 but not pS1928.\n\n(B) The α11.2 short and long forms appear only partially resolved because the weak α11.2 signals in HEK293 cell samples required long exposure times. The upper band as detected by CNC1 after FP1 immunoprecipitation from rat and mouse forebrain slices and cortical slices co-migrated with the full length form of α11.2 expressed in HEK293 cells, while the lower band co-migrated with the truncated Δ1800 α11.2 expressed in HEK293 cells. Sometimes, as seen here, a significant portion of the pore-forming subunit aggregated at the interface between stacking and resolving gels. This unresolved fraction (thick arrow) is not representative of its true molecular mass and is not shown in the main figures.\n\nRaw data for Figure 5. Surface biotinylation labels α11.2 size forms with apparent MR > 200 kDa in rat cortical and forebrain slices.\n\nOriginal source images for Figure 5: Cortical and forebrain slices were surface biotinylated and solubilized before pulldown with NeutrAvidin Sepharose, SDS-PAGE in 8% acrylamide gels, and immunoblotting with CNC1 and FP1. Control reflects slices mock treated without Sulfo-NHS-SS-biotin to demonstrate specificity of pulldown. Twenty μL of lysate was also directly loaded for comparison.\n\nRaw data for Figure 6.
Differential recognition of the strong 150 kDa FP1 band in lysate and the weak 150 kDa band by FP1, CNC1, and ACC-003 after IP of α11.2 with FP1.\n\nOriginal source images for Figure 6: Immunoblots with CNC1 (A,B), FP1 (C), and ACC-003 (D,E) of Triton X-100 extracts from WT mice (lysate) and after immunoprecipitation with FP1 from cKO and WT mice. Gels were polymerized from 8% acrylamide. Note that a weak 150 kDa band is detected by CNC1, FP1, and ACC-003 after enrichment of α11.2 by immunoprecipitation with FP1 but the strongly immunoreactive 150 kDa band detected by FP1 in lysate is not detectable by either CNC1 or ACC-003.\n\nDOI: 10.5256/f1000research.11808.d168808 (Buonarati et al., 2017)
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by NIH grants F31 NS086226 (ORB), T32GM099608 (PBH), AHA14PRE19900021 (PBH), R01AG052934 (GGM), R01 NS078792 (JWH), and R01 AG017502 (JWH).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBavley CC, Fischer DK, Rizzo BK, et al.: Cav1.2 channels mediate persistent chronic stress-induced behavioral deficits that are associated with prefrontal cortex activation of the p25/Cdk5-glucocorticoid receptor pathway. Neurobiol Stress. 2017; 7: 27–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerkefeld H, Sailer CA, Bildl W, et al.: BKCaCav channel complexes mediate rapid and localized Ca2+-activated K+ signaling. Science. 2006; 314(5799): 615–620. PubMed Abstract | Publisher Full Text\n\nBernard PB, Castano AM, Bayer KU, et al.: Necessary, but not sufficient: insights into the mechanisms of mGluR mediated long-term depression from a rat model of early life seizures. Neuropharmacology. 2014; 84: 1–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBolshakov VY, Siegelbaum SA: Postsynaptic induction and presynaptic expression of hippocampal long-term depression. Science. 1994; 264(5162): 1148–52. PubMed Abstract | Publisher Full Text\n\nBoric K, Muñoz P, Gallagher M, et al.: Potential adaptive function for altered long-term potentiation mechanisms in aging hippocampus. J Neurosci. 2008; 28(32): 8034–8039. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBunemann M, Gerhardstein BL, Gao T, et al.: Functional regulation of L-type calcium channels via protein kinase A-mediated phosphorylation of the beta(2) subunit. J Biol Chem. 1999; 274(48): 33851–33854. 
PubMed Abstract | Publisher Full Text\n\nBuonarati OR, Henderson PB, Murphy GG, et al.: Dataset 1 in: Proteolytic processing of the L-type Ca2+ channel alpha11.2 subunit in neurons. F1000Research. 2017. Data Source\n\nCui Y, Costa RM, Murphy GG, et al.: Neurofibromin regulation of ERK signaling modulates GABA release and learning. Cell. 2008; 135(3): 549–560. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDavare MA, Avdonin V, Hall DD, et al.: A beta2 adrenergic receptor signaling complex assembled with the Ca2+ channel Cav1.2. Science. 2001; 293(5527): 98–101. PubMed Abstract | Publisher Full Text\n\nDavare MA, Dong F, Rubin CS, et al.: The A-kinase anchor protein MAP2B and cAMP-dependent protein kinase are associated with class C L-type calcium channels in neurons. J Biol Chem. 1999; 274(42): 30280–30287. PubMed Abstract | Publisher Full Text\n\nDavare MA, Hell JW: Increased phosphorylation of the neuronal L-type Ca2+ channel Cav1.2 during aging. Proc Natl Acad Sci U S A. 2003; 100(26): 16018–16023. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDavare MA, Horne MC, Hell JW: Protein phosphatase 2A is associated with class C L-type calcium channels (Cav1.2) and antagonizes channel phosphorylation by cAMP-dependent protein kinase. J Biol Chem. 2000; 275(50): 39710–39717. PubMed Abstract | Publisher Full Text\n\nDe Jongh KS, Murphy BJ, Colvin AA, et al.: Specific phosphorylation of a site in the full-length form of the alpha 1 subunit of the cardiac L-type calcium channel by adenosine 3',5'-cyclic monophosphate-dependent protein kinase. Biochemistry. 1996; 35(32): 10392–10402. PubMed Abstract | Publisher Full Text\n\nDolmetsch RE, Pajvani U, Fife K, et al.: Signaling to the nucleus by an L-type calcium channel-calmodulin complex through the MAP kinase pathway. Science. 2001; 294(5541): 333–339. 
PubMed Abstract | Publisher Full Text\n\nDubel SJ, Starr TV, Hell J, et al.: Molecular cloning of the alpha-1 subunit of an omega-conotoxin-sensitive calcium channel. Proc Natl Acad Sci U S A. 1992; 89(11): 5058–5062. PubMed Abstract | Free Full Text\n\nFuller MD, Emrick MA, Sadilek M, et al.: Molecular mechanism of calcium channel regulation in the fight-or-flight response. Sci Signal. 2010; 3(141): ra70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGao T, Bunemann M, Gerhardstein BL, et al.: Role of the C terminus of the alpha 1C (CaV1.2) subunit in membrane targeting of cardiac L-type calcium channels. J Biol Chem. 2000; 275(33): 25436–25444. PubMed Abstract | Publisher Full Text\n\nGao T, Cuadra AE, Ma H, et al.: C-terminal fragments of the alpha 1C (CaV1.2) subunit associate with and regulate L-type calcium channels containing C-terminal-truncated alpha 1C subunits. J Biol Chem. 2001; 276(24): 21089–21097. PubMed Abstract | Publisher Full Text\n\nGraef IA, Mermelstein PG, Stankunas K, et al.: L-type calcium channels and GSK-3 regulate the activity of NF-ATc4 in hippocampal neurons. Nature. 1999; 401(6754): 703–708. PubMed Abstract | Publisher Full Text\n\nGrover LM, Teyler TJ: Two components of long-term potentiation induced by different patterns of afferent activation. Nature. 1990; 347(6292): 477–479. PubMed Abstract | Publisher Full Text\n\nHall DD, Dai S, Tseng PY, et al.: Competition between α-actinin and Ca2+-calmodulin controls surface retention of the L-type Ca2+ channel CaV1.2. Neuron. 2013; 78(3): 483–497. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHall DD, Davare MA, Shi M, et al.: Critical role of cAMP-dependent protein kinase anchoring to the L-type calcium channel Cav1.2 via A-kinase anchor protein 150 in neurons. Biochemistry. 2007; 46(6): 1635–1646. 
PubMed Abstract | Publisher Full Text\n\nHall DD, Feekes JA, Arachchige Don AS, et al.: Binding of protein phosphatase 2A to the L-type calcium channel Cav1.2 next to Ser1928, its main PKA site, is critical for Ser1928 dephosphorylation. Biochemistry. 2006; 45(10): 3448–3459. PubMed Abstract | Publisher Full Text\n\nHell JW, Westenbroek RE, Breeze LJ, et al.: N-methyl-D-aspartate receptor-induced proteolytic conversion of postsynaptic class C L-type calcium channels in hippocampal neurons. Proc Natl Acad Sci U S A. 1996; 93(8): 3362–3367. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHell JW, Westenbroek RE, Warner C, et al.: Identification and differential subcellular localization of the neuronal class C and class D L-type calcium channel alpha 1 subunits. J Cell Biol. 1993a; 123(4): 949–962. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHell JW, Yokoyama CT, Breeze LJ, et al.: Phosphorylation of presynaptic and postsynaptic calcium channels by cAMP-dependent protein kinase in hippocampal neurons. EMBO J. 1995; 14(13): 3036–3044. PubMed Abstract | Free Full Text\n\nHell JW, Yokoyama CT, Wong ST, et al.: Differential phosphorylation of two size forms of the neuronal class C L-type calcium channel alpha 1 subunit. J Biol Chem. 1993b; 268(26): 19451–19457. PubMed Abstract\n\nHulme JT, Westenbroek RE, Scheuer T, et al.: Phosphorylation of serine 1928 in the distal C-terminal domain of cardiac CaV1.2 channels during beta1-adrenergic regulation. Proc Natl Acad Sci U S A. 2006a; 103(44): 16574–16579. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHulme JT, Yarov-Yarovoy V, Lin TW, et al.: Autoinhibitory control of the CaV1.2 channel by its proteolytically processed distal C-terminal domain. J Physiol. 2006b; 576(Pt 1): 87–102. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKochlamazashvili G, Henneberger C, Bukalo O, et al.: The extracellular matrix molecule hyaluronic acid regulates hippocampal synaptic plasticity by modulating postsynaptic L-type Ca2+ channels. Neuron. 2010; 67(1): 116–128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Pink MD, Murphy JG, et al.: Balanced interactions of calcineurin with AKAP79 regulate Ca2+-calcineurin-NFAT signaling. Nat Struct Mol Biol. 2012; 19(3): 337–345. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiao P, Yu D, Hu Z, et al.: Alternative splicing generates a novel truncated Cav1.2 channel in neonatal rat heart. J Biol Chem. 2015; 290(14): 9262–9272. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMa H, Groth RD, Cohen SM, et al.: γCaMKII shuttles Ca2+/CaM to the nucleus to trigger CREB phosphorylation and gene expression. Cell. 2014; 159(2): 281–294. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarrion NV, Tavalin SJ: Selective activation of Ca2+-activated K+ channels by co-localized Ca2+ channels in hippocampal neurons. Nature. 1998; 395(6705): 900–905. PubMed Abstract | Publisher Full Text\n\nMarshall MR, Clark JP 3rd, Westenbroek R, et al.: Functional roles of a C-terminal signaling complex of CaV1 channels and A-kinase anchoring protein 15 in brain neurons. J Biol Chem. 2011; 286(14): 12627–12639. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMichailidis IE, Abele-Henckels K, Zhang WK, et al.: Age-related homeostatic midchannel proteolysis of neuronal L-type voltage-gated Ca2+ channels. Neuron. 2014; 82(5): 1045–1057. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMikami A, Imoto K, Tanabe T, et al.: Primary structure and functional expression of the cardiac dihydropyridine-sensitive calcium channel. Nature. 1989; 340(6230): 230–233. 
PubMed Abstract | Publisher Full Text\n\nMoosmang S, Haider N, Klugbauer N, et al.: Role of hippocampal Cav1.2 Ca2+ channels in NMDA receptor-independent synaptic plasticity and spatial memory. J Neurosci. 2005; 25(43): 9883–9892. PubMed Abstract | Publisher Full Text\n\nMurphy JG, Sanderson JL, Gorski JA, et al.: AKAP-anchored PKA maintains neuronal L-type calcium channel activity and NFAT transcriptional signaling. Cell Rep. 2014; 7(5): 1577–1588. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPatriarchi T, Qian H, Di Biase V, et al.: Phosphorylation of Cav1.2 on S1928 Uncouples the L-type Ca2+ Channel from the β2 Adrenergic Receptor. EMBO J. 2016; 35(12): 1330–1345. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQian H, Patriarchi T, Price JL, et al.: Phosphorylation of Ser1928 mediates the enhanced activity of the L-type Ca2+ channel Cav1.2 by the β2-adrenergic receptor in neurons. Sci Signal. 2017; 10(463): Pii: eaaf9659. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeisenberger C, Specht V, Welling A, et al.: Functional embryonic cardiomyocytes after disruption of the L-type alpha1C (Cav1.2) calcium channel gene in the mouse. J Biol Chem. 2000; 275(50): 39193–39199. PubMed Abstract | Publisher Full Text\n\nSinnegger-Brauns MJ, Hetzenauer A, Huber IG, et al.: Isoform-specific regulation of mood behavior and pancreatic beta cell and cardiovascular function by L-type Ca2+ channels. J Clin Invest. 2004; 113(10): 1430–1439. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnutch TP, Tomlinson WJ, Leonard JP, et al.: Distinct calcium channels are generated by alternative splicing and are differentially expressed in the mammalian CNS. Neuron. 1991; 7(1): 45–57. PubMed Abstract | Publisher Full Text\n\nSplawski I, Timothy KW, Sharpe LM, et al.: Ca(V)1.2 calcium channel dysfunction causes a multisystem disorder including arrhythmia and autism. Cell. 2004; 119(1): 19–31. 
PubMed Abstract | Publisher Full Text\n\nTaylor AB, Fallon JR: Dendrites contain a spacing pattern. J Neurosci. 2006; 26(4): 1154–1163. PubMed Abstract | Publisher Full Text\n\nTippens AL, Pare JF, Langwieser N, et al.: Ultrastructural evidence for pre- and postsynaptic localization of Cav1.2 L-type Ca2+ channels in the rat hippocampus. J Comp Neurol. 2008; 506(4): 569–583. PubMed Abstract | Publisher Full Text\n\nTseng PY, Henderson PB, Hergarden AC, et al.: α-Actinin Promotes Surface Localization and Current Density of the Ca2+ Channel CaV1.2 by Binding to the IQ Region of the α1 Subunit. Biochemistry. 2017; 56(28): 3669–3681. PubMed Abstract | Publisher Full Text\n\nWatschinger K, Horak SB, Schulze K, et al.: Functional properties and modulation of extracellular epitope-tagged CaV2.1 voltage-gated calcium channels. Channels (Austin). 2008; 2(6): 461–473. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWei X, Neely A, Lacerda AE, et al.: Modification of Ca2+ channel activity by deletions at the carboxyl terminus of the cardiac alpha 1 subunit. J Biol Chem. 1994; 269(3): 1635–1640. PubMed Abstract\n\nWestenbroek RE, Hell JW, Warner C, et al.: Biochemical properties and subcellular distribution of an N-type calcium channel alpha 1 subunit. Neuron. 1992; 9(6): 1099–1115. PubMed Abstract | Publisher Full Text\n\nWheeler DG, Groth RD, Ma H, et al.: CaV1 and CaV2 channels engage distinct modes of Ca2+ signaling to control CREB-dependent gene expression. Cell. 2012; 149(5): 1112–1124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhite JA, McKinney BC, John MC, et al.: Conditional forebrain deletion of the L-type calcium channel CaV1.2 disrupts remote spatial memories in mice. Learn Mem. 2008; 15(1): 1–5. PubMed Abstract | Publisher Full Text\n\nZhu Y, Romero MI, Ghosh P, et al.: Ablation of NF1 function in neurons induces abnormal development of cerebral cortex and reactive gliosis in the brain. Genes Dev. 2001; 15(7): 859–876. 
PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "24428",
"date": "26 Jul 2017",
"name": "Annette C. Dolphin",
"expertise": [
"Reviewer Expertise Voltage gated calcium channel trafficking"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes an attempt at replication of the results of Michailidis et al, regarding the existence and relevance of mid-channel proteolysis of CaV1.2, previously described as a homeostatic mechanism to regulate channel activity (Michailidis et al., 20141). Mid channel proteolysis was shown in that paper to occur in the II-III linker and to result in prominent 150 kDa and 90 kDa bands.\nThe careful work of Buonarati et al describes here the use of multiple different CaV1.2 antibodies and % gels, and shows conclusively that the most prominent size forms are 250 and 210 kDa, relating to full length and C-terminally cleaved CaV1.2 channels in brain tissue. A 150 kDa band was also observed with two of the antibodies, but was still as prominent in KO mouse tissue, indicating it is not a CaV1.2-related fragment. The other antibodies identified only very minor 150 kDa bands, estimated to be <1% of the total, and the authors conclude mid-channel proteolysis is minimal in brain tissue.\nIn the Discussion the authors describe several possibilities that could account for the disparity of results, including % gels used and gel run times, as well as antibody specificity, leading to mis-identification of bands. They also critique another result in the original paper, partial lack of co-localization of an N-terminal-GFP tag and an extracellular HA tag in CaV1.2, which was also originally attributed to mid-channel proteolysis. 
They point out that GFP is quenched by paraformaldehyde and fixation also induces partial permeabilization of hippocampal neurons in culture. In my view this is a completely reasonable comment, and probably should have been picked up by the original referees.\nIn doing this painstaking study, the authors have also done a great service to the community by comparing multiple different antibodies to CaV1.2.\n\nMinor comments.\nDo the authors have any clues about the 150 kDa band, is it a contaminant from blood? Have the authors ever perfused mice or rats with ice-cold saline before harvesting the brain, to determine whether it is reduced?\n\nFig. 5B, identify the MW markers on the left.\n\nFig 6 might be easier to grasp rapidly if the authors added IB: CNC1 on the left, and put IP next to all the Ab labels on the bottom of each blot.\n\nI may have missed how the authors quantified any mid-channel proteolytic processing to be ~ 1% (page 12).\n\nIn the Abstract, I suggest two changes:\n- line 5 change “Recent work suggests..” to “However, recent work further suggests”\n- last line change “at best” to “at most”\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24425",
"date": "26 Jul 2017",
"name": "Jörg Striessnig",
"expertise": [
"Reviewer Expertise pharmacology and biochemistry of voltage gated L-type calcium channels"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBuonarati et al. provide a long overdue answer to a previous publication in NEURON (their ref Michailidis et al,. 2014) which reported the possibility of proteolytic processing of Cav1.2 Ca2+ channel a1-subunits within the II-III cytoplasmic loop giving rise to a 150 kDa polypeptide. Based on this finding further experiments suggested a functional role of this \"midchannel proteolysis\", in particular a homeostatic feedback regulation affecting neuronal signaling. However, these findings were heavily criticized by experts familiar with the biochemistry of Cav1.2 channels because essential controls for the existence of such a 150 kDa polypeptide were missing in the publication of Michailidis et al. (2014)1.\nFirst, the widely established and accepted golden standard to verify the specificity of antibodies is to use identically prepared samples from knockout animals. In the case of Cav1.2 these animals are widely available from several groups and frozen brains can be easily shipped on ice. Second, the molecular mass standards used in the paper to provide convincing support for such a far-reaching conclusion are insufficiently described. Third, no efforts have been made to systematically verify the mass of the 150 kDa polypeptide by a recombinant HEK-293 cell – expressed protein of the proposed sequence (also standard in the field). 
Given the potential impact of the existence of midchannel proteolysis for the Ca2+ channel field, it is difficult to rationalize why reviewers of this paper have not insisted on these simple controls.\nFortunately, Buonarati and colleagues have now performed exactly these experiments at the highest possible technical level. It is therefore not surprising that they cannot confirm a significant level of midchannel proteolysis in mouse or rat brain, even under conditions very similar to those used by Michailidis et al. The experiments were repeated with 6 different antibodies, carefully selected molecular mass markers and also with different polyacrylamide concentrations to account for the possible aberrant migration of these large membrane proteins. Their findings are clearly presented and nicely discussed, and offer several potential explanations for the discrepant findings.\nIn summary, this publication is an excellent contribution to this field and will hopefully end the discussion about midchannel proteolysis.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24429",
"date": "16 Aug 2017",
"name": "Mark L. Dell’Acqua",
"expertise": [
"Reviewer Expertise L-type Ca2+ channel regualtion",
"PKA signaling",
"neuronal synaptic plasticity"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBuonarati et al. have performed a very rigorous biochemical analysis using 6 different antibodies and conditional knockout (KO) mice controls to convincingly demonstrate that the major forms of the L-channel CaV1.2 pore forming subunit present in mouse and rat brain are full-length channels of ~250 kDa and C-terminally truncated channels of ~210 kDa. Importantly, the authors' exhaustive analysis using immunoblotting of whole cell extracts as well as immunoprecipitation or surface biotinylation followed by immunoblotting provides no evidence for CaV1.2 mid-channel cleavage that would produce ~150 kDa and ~90 kDa bands as reported in Michailidis et al., 20141. In particular, by employing neuron-specific CaV1.2 conditional KO mice as controls in conjunction with multiple antibodies spanning the N- to C-terminal regions of CaV1.2, Buonarati et al. demonstrate that the previously reported ~150 kDa and ~90 kDa bands are most likely protein products unrelated to CaV1.2 that cross-react with some but not all CaV1.2 antibodies.\nOverall, this study is extremely well executed and thoughtfully discussed to provide a very valuable addition to the L-channel literature. 
I only have one minor question/comment for the authors.\nThe authors cite previous publications from the Hosey and Catterall groups showing that the ~40 kDa distal C-terminal fragment produced by cleavage of CaV1.2 remains associated with the ~210 kDa fragment to regulate channel function (Fuller et al., 20102; Gao et al., 20013; Hulme et al., 2006b4). However, these prior studies were focused on CaV1.2 cleavage in muscle cells and primarily relied on reconstitution of the association between the 40 kDa distal C-terminal fragment and the ~210 kDa body of the channel by heterologous expression. Thus, it would be interesting if the authors could determine if the ~40 kDa distal C-terminal fragment of CaV1.2 is also present in mouse and rat brain and whether this fragment can co-immunoprecipitate with the ~210 kDa fragment. Such additional information could be valuable in understanding the differences in PKA regulation of CaV1.2 channels through phosphorylation at S1928 in the distal C-terminus versus S1700 in the proximal C-terminus recently reported by the authors for neurons (Qian et al., 20175) compared to earlier work by others for cardiac myocytes (Fuller et al., 20102; Moosmang et al., 20056).\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1166
|
https://f1000research.com/articles/7-691/v1
|
01 Jun 18
|
{
"type": "Research Article",
"title": "Using different methods to process forced expiratory volume in one second (FEV1) data can impact on the interpretation of FEV1 as an outcome measure to understand the performance of an adult cystic fibrosis centre: A retrospective chart review",
"authors": [
"Zhe Hui Hoo",
"Muhaned S.A. El-Gheryani",
"Rachael Curley",
"Martin J. Wildman",
"Zhe Hui Hoo",
"Muhaned S.A. El-Gheryani",
"Rachael Curley"
],
"abstract": "Background: Forced expiratory volume in one second (FEV1) is an important cystic fibrosis (CF) prognostic marker and an established endpoint for CF clinical trials. FEV1 is also used in observation studies, e.g. to compare different centre’s outcomes. We wished to evaluate whether different methods of processing FEV1 data can impact on a centre’s outcome. Methods: This is a single-centre retrospective analysis of routinely collected data from 2013-2016 which included 208 adults with CF. Year-to-year %FEV1 change was calculated by subtracting best %FEV1 at Year 1 from Year 2 (i.e. negative values indicate %FEV1 decline), and compared using Friedman test. Three methods were used to process %FEV1 data. First, %FEV1 calculated with Knudson equation was extracted directly from spirometer machines. Second, FEV1 volume were extracted then converted to %FEV1 using clean height data and Knudson equation. Third, FEV1 volume were extracted then converted to %FEV1 using clean height data and GLI equation. In addition, %FEV1 decline calculated using GLI equation was adjusted for baseline %FEV1 to understand the impact of case-mix adjustment. Results: There was a trend of reduction in %FEV1 decline with all three data processing methods but the magnitude of %FEV1 decline differed. Median change in %FEV1 for 2013-2014, 2014-2015 and 2015-2016 was –2.0, –1.0 and 0.0 respectively using %FEV1 in Knudson equation whereas the median change was –1.1, –0.9 and –0.3 respectively using %FEV1 in the GLI equation. A statistically significant p-value (0.016) was only obtained when using %FEV1 in Knudson equation extracted directly from spirometer machines. Conclusions: Although the trend of reduction in %FEV1 decline was robust, different data processing methods yielded varying results when %FEV1 decline was compared using a standard related group non-parametric statistical test. 
Observational studies with %FEV1 decline as an outcome measure should carefully consider and clearly specify the data processing methods used.",
"keywords": [
"Cystic fibrosis",
"epidemiology",
"patient outcome assessment",
"forced expiratory volume"
],
"content": "Introduction\n\nCystic fibrosis (CF) is a multi-system genetic condition but the two main affected organs are lungs (resulting in recurrent infections and respiratory failure) and gastrointestinal tract (resulting in fat malabsorption and poor growth)1. Median survival has improved to 45 years, in part because of improvement in care quality2. An important quality improvement initiative is benchmarking, which involves identifying high-performing centres and the practices associated with outstanding performance3–5. Since forced expiratory volume in one second (FEV1) is an important CF prognostic marker6–9, it is often used as an outcome measure for benchmarking3–5,10.\n\nDifferent statistical methods of analysing FEV1 data can yield different results11, but there is scant attention paid to the methods of processing FEV1 data. We previously reported a statistically significant reduction in %FEV1 decline for our CF centre from 2013–201612. We now set out to understand the impact of using different FEV1 data processing methods on our CF centre’s outcome.\n\n\nMethods\n\nThis is a single-centre retrospective analysis of routinely collected clinical data from 2013–2016. Regulatory approval for the analysis was obtained from NHS Health Research Authority (IRAS number 210313). All adults with CF diagnosed according to the UK CF Trust criteria aged ≥16 years were included, except those with lung transplantation or on ivacaftor. These treatments have transformative effects on %FEV113–15, thus may affect the interpretation of %FEV1 decline.\n\nDemographic data (age, gender, genotype, pancreatic status, CF related diabetes, Pseudomonas aeruginosa status), body mass index (BMI) and FEV1 data were collected by two investigators (HZH and RC / HZH and MEG) independently reviewing paper notes and electronic records. 
Where data from the two investigators differed, the original paper notes or electronic records were reviewed by both investigators to ensure the accuracy of the abstracted data and to help avoid potential bias from inaccurate or inconsistent data collection16. FEV1 data were processed with three different methods prior to analysis. First, %FEV1 readings (calculated with the Knudson equation17 and available in whole numbers) were directly extracted from spirometer machines. Second, FEV1 volumes (in litres, to two decimal places) were extracted and clean height data were used to calculate %FEV1 (as whole numbers) with the Knudson equation17. Third, FEV1 volumes (in litres, to two decimal places) were extracted and clean height data were used to calculate %FEV1 with the GLI equation18 using an Excel Macro (Microsoft Excel 2013).\n\nBest %FEV1, i.e. the highest %FEV1 reading in a calendar year for each study subject, was used for analysis since it is most reflective of the true baseline %FEV119. Year-to-year %FEV1 change was calculated by subtracting best %FEV1 at Year 1 from Year 2 (i.e. negative values indicate %FEV1 decline and positive values indicate increase in %FEV1). In addition to calculating year-to-year %FEV1 change using three different FEV1 data processing methods, %FEV1 change calculated with the GLI equation was also adjusted for baseline %FEV1 using reference values from the Epidemiologic Study of CF (ESCF)20. The ESCF study found median %FEV1 change of –3%/year, –2%/year and –0.5%/year for baseline %FEV1 ≥100%, 40–99.9% and <40% respectively20. Adjusted %FEV1 change was calculated by subtracting the median ESCF %FEV1 change from the actual %FEV1 change. Thus, an adjusted %FEV1 change >0 meant the subject’s %FEV1 decline was less than expected (indicating better health outcome) whilst an adjusted %FEV1 change <0 meant the subject’s %FEV1 decline was more than expected (indicating worse health outcome). 
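The year-to-year change and ESCF case-mix adjustment described above can be sketched in Python. This is an illustrative sketch only: the function names and the example values are assumptions, not the study's actual analysis code (the paper used SPSS and an Excel macro).

```python
# Sketch of the year-to-year %FEV1 change and ESCF adjustment described
# above. ESCF median annual %FEV1 changes: -3 (baseline >=100%),
# -2 (40-99.9%), -0.5 (<40%). Names here are illustrative, not from the study.

def escf_expected_change(baseline_pct_fev1):
    """Median annual %FEV1 change from ESCF for the subject's baseline band."""
    if baseline_pct_fev1 >= 100:
        return -3.0
    if baseline_pct_fev1 >= 40:
        return -2.0
    return -0.5

def adjusted_pct_fev1_change(best_year1, best_year2):
    """Actual change (Year 2 minus Year 1) minus the ESCF median change.

    A result > 0 means the decline was less than expected (better outcome);
    a result < 0 means the decline was more than expected (worse outcome).
    """
    actual_change = best_year2 - best_year1   # negative values = decline
    return actual_change - escf_expected_change(best_year1)

# Hypothetical example: a subject declining from 85% to 84% lost less than
# the ESCF median of -2%/year for that band, so the adjusted change is +1.
print(adjusted_pct_fev1_change(85, 84))  # 1.0
```

The adjustment simply re-centres each subject's observed change on the expected change for their baseline band, so centres with different case mixes become more comparable.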
%FEV1 change from 2013–2014 to 2015–2016 calculated using different FEV1 data processing methods was compared using the Friedman test. Analyses were performed using SPSS v24 (IBM Corp) and p-value <0.05 was considered statistically significant.\n\n\nResults\n\nThis analysis included 208 adults, with 147 adults providing data for all four years. Overall, the cohort was ageing but baseline %FEV1 increased from 2014 onwards (see Table 1).\n\n¶ Genotype status as defined by international consensus23. Homozygous class I-III mutations indicate ‘severe genotype’.\n\n† Pancreatic insufficiency was diagnosed by the clinical team on the basis of ≥2 faecal pancreatic elastase levels <200µg/g stool and symptoms consistent with maldigestion and malabsorption, in accordance with the UK Cystic Fibrosis (CF) Trust guideline.\n\n‡ CF related diabetes was diagnosed by the clinical team on the basis of oral glucose tolerance test and continuous subcutaneous glucose monitoring results, in accordance with the UK CF Trust guideline.\n\n§ Pseudomonas aeruginosa status was determined according to the Leeds criteria24.\n\nThe %FEV1 increase was in part due to younger adults with higher %FEV1 transitioning from paediatric care because %FEV1 tended to decline from year to year (see Table 2). However, different %FEV1 decline results were obtained with different FEV1 data processing methods. There was a statistically significant reduction in the rate of %FEV1 decline using %FEV1 readings as recorded in spirometer machines (p=0.016). Cleaning of height data and standardisation of %FEV1 calculation with the Knudson equation17 did not alter the magnitude of %FEV1 decline, but the p-value was no longer statistically significant (p=0.062). The use of the GLI equation altered the magnitude of %FEV1 decline although the trend of reduction in %FEV1 decline persisted (p=0.135). 
Adjustment for baseline %FEV1 further increased the p-value (p=0.210).\n\nESCF - Epidemiologic Study of cystic fibrosis\n\n† The vast majority of the %FEV1 data were from spirometer machines at the Sheffield Adult cystic fibrosis (CF) centre, which were calculated with the Knudson equation17 in whole numbers. Some %FEV1 data were from spirometer machines at the Pulmonary Function Unit which operationalised the Knudson equation differently, by calculating age to one decimal place to determine the predicted FEV1. These spirometer machines also provided %FEV1 to two decimal places, but this was rounded to whole numbers for the purpose of analysis. These results were presented at the 2017 North American CF Conference and were published as an abstract in Pediatric Pulmonology12.\n\n‡ FEV1 volumes were available in litres to two decimal places from spirometer machines. Height data were also extracted to allow the calculation of predicted FEV1. This led us to uncover inconsistent recording of height, which affected 30–40% of the study subjects and would have introduced erroneous variability to the %FEV1 because all equations for predicted %FEV1 are dependent on height. Height data were cleaned to weed out errors. Where there was uncertainty regarding the height, the higher value was used to obtain a conservative estimate of %FEV1. To replicate the calculation process of the spirometer machines at the Sheffield Adult CF centre, age was rounded down to a whole number and predicted FEV1 volumes were calculated to two decimal places using the Knudson equation17. This was used to derive the %FEV1, which was then rounded to whole numbers for the purpose of analysis.\n\nϕ FEV1 and height data were extracted as above. %FEV1 was calculated with the GLI equation18 using an Excel Macro available at the European Respiratory Society website.\n\n§ %FEV1 calculated using the GLI equation18 as described above, then adjusted for baseline %FEV1 as described in the ‘Methods’ section. 
An adjusted %FEV1 change of >0 meant the subject’s %FEV1 decline was less than expected for his / her baseline %FEV1, indicating better health outcomes.\n\n\nDiscussion\n\nWe demonstrated that different centre-level %FEV1 decline results were obtained using different FEV1 data processing methods. In particular, year-on-year %FEV1 decline was smaller in magnitude when %FEV1 was calculated using the GLI equation18 instead of the Knudson equation17. This is in part due to the demographics of our centre, which has a relatively young adult population. A previous study found a near-linear %FEV1 decline from childhood to adulthood with the GLI equation, whereas there was accelerated %FEV1 decline during adolescence and young adulthood when %FEV1 was calculated with the Knudson equation21. One advantage of using the GLI equation, which is seamless across all ages, is that it improves the interpretation of %FEV1 decline21,22. Another advantage is that %FEV1 decline can be adjusted for baseline %FEV1 using ESCF reference values (since the ESCF values for %FEV1 decline were calculated using the GLI equation20).\n\nA limitation of all single-centre analyses is the potential lack of generalisability. Another limitation of our analysis is that the ESCF reference values used to adjust %FEV1 decline were derived using a cohort from around 15 years ago20, and may not represent the current population. Our results nonetheless highlighted that %FEV1 decline can be extremely sensitive to the FEV1 data processing methods. This is one of the challenges of using %FEV1 decline to infer quality of care. Another challenge is that %FEV1 lacks sensitivity as an outcome measure. A recent sample size estimation using the UK CF registry data suggests that 273 adults per centre are needed to detect a 5% FEV1 difference at the 95% significance level25. 
The sensitivity of measures used to detect variations in care quality is particularly pertinent to CF because a relatively small population is spread across many centres. Indeed, only 6/28 (21.4%) of all UK adult CF centres have ≥273 adults. That means process measures, e.g. medication adherence, are important to detect variations in quality of CF care. Mant & Hicks previously demonstrated that measuring processes of care proven in randomised controlled trials to reduce death allows detection of meaningful differences in care quality for myocardial infarction with just 75 cases, whereas 8179 cases would be needed if mortality was used as the quality indicator26.\n\nGiven the limitations of FEV1 as an outcome measure in CF, results of centre comparisons based on FEV1 data should be carefully interpreted. Observational studies with %FEV1 decline as an outcome measure should carefully consider and clearly specify the data processing methods used.\n\n\nEthical considerations\n\nRegulatory approval for the analysis was obtained from NHS Health Research Authority (IRAS number 210313).\n\n\nData availability\n\nDataset 1: Sheffield forced expiratory volume in one second (FEV1) data 10.5256/f1000research.14981.d20560327",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this piece of work.\n\n\nReferences\n\nElborn JS: Cystic fibrosis. Lancet. 2016; 388(10059): 2519–2531. PubMed Abstract | Publisher Full Text\n\nStevens DP, Marshall BC: A decade of healthcare improvement in cystic fibrosis: lessons for other chronic diseases. BMJ Qual Saf. 2014; 23 Suppl 1: i1–2. PubMed Abstract | Publisher Full Text\n\nBoyle MP, Sabadosa KA, Quinton HB, et al.: Key findings of the US Cystic Fibrosis Foundation's clinical practice benchmarking project. BMJ Qual Saf. 2014; 23 Suppl 1: i15–22. PubMed Abstract | Publisher Full Text\n\nSchechter MS: Benchmarking to improve the quality of cystic fibrosis care. Curr Opin Pulm Med. 2012; 18(6): 596–601. PubMed Abstract | Publisher Full Text\n\nStern M, Niemann N, Wiedemann B, et al.: Benchmarking improves quality in cystic fibrosis care: a pilot project involving 12 centres. Int J Qual Health Care. 2011; 23(3): 349–356. PubMed Abstract | Publisher Full Text\n\nLiou TG, Adler FR, Fitzsimmons SC, et al.: Predictive 5-year survivorship model of cystic fibrosis. Am J Epidemiol. 2001; 153(4): 345–352. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorey M, Edwards L, Levison H, et al.: Longitudinal analysis of pulmonary function decline in patients with cystic fibrosis. J Pediatr. 1997; 131(6): 809–814. PubMed Abstract | Publisher Full Text\n\nRosenbluth DB, Wilson K, Ferkol T, et al.: Lung function decline in cystic fibrosis patients and timing for lung transplantation referral. Chest. 2004; 126(2): 412–419. PubMed Abstract | Publisher Full Text\n\nKonstan MW, VanDevanter DR, Sawicki GS, et al.: Association of High-Dose Ibuprofen Use, Lung Function Decline, and Long-Term Survival in Children with Cystic Fibrosis. Ann Am Thorac Soc. 2018; 15(4): 485–493. 
PubMed Abstract | Publisher Full Text\n\nWagener JS, Elkin EP, Pasta DJ, et al.: Pulmonary function outcomes for assessing cystic fibrosis care. J Cyst Fibros. 2015; 14(3): 376–383. PubMed Abstract | Publisher Full Text\n\nSzczesniak R, Heltshe SL, Stanojevic S, et al.: Use of FEV1 in cystic fibrosis epidemiologic studies and clinical trials: A statistical perspective for the clinical researcher. J Cyst Fibros. 2017; 16(3): 318–326. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoo ZH, Curley R, Walters SJ, et al.: Real world evidence of sustained improvement in objective adherence to maintenance inhaled therapies in an adult cystic fibrosis centre [abstract]. Pediatr Pulmonol. 2017; 52(Suppl 47): S488. Reference Source\n\nInci I, Stanimirov O, Benden C, et al.: Lung transplantation for cystic fibrosis: a single center experience of 100 consecutive cases. Eur J Cardiothorac Surg. 2012; 41(2): 435–440. PubMed Abstract | Publisher Full Text\n\nLynch JP 3rd, Sayah DM, Belperio JA, et al.: Lung transplantation for cystic fibrosis: results, indications, complications, and controversies. Semin Respir Crit Care Med. 2015; 36(2): 299–320. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRamsey BW, Davies J, McElvaney NG, et al.: A CFTR potentiator in patients with cystic fibrosis and the G551D mutation. N Engl J Med. 2011; 365(18): 1663–1672. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGilbert EH, Lowenstein SR, Koziol-McLain J, et al.: Chart reviews in emergency medicine research: Where are the methods? Ann Emerg Med. 1996; 27(3): 305–308. PubMed Abstract | Publisher Full Text\n\nKnudson RJ, Lebowitz MD, Holberg CJ, et al.: Changes in the normal maximal expiratory flow-volume curve with growth and aging. Am Rev Respir Dis. 1983; 127(6): 725–34. PubMed Abstract\n\nQuanjer PH, Stanojevic S, Cole TJ, et al.: Multi-ethnic reference values for spirometry for the 3-95-yr age range: the global lung function 2012 equations. Eur Respir J. 
2012; 40(6): 1324–1343. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiou TG, Elkin EP, Pasta DJ, et al.: Year-to-year changes in lung function in individuals with cystic fibrosis. J Cyst Fibros. 2010; 9(4): 250–256. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorgan WJ, VanDevanter DR, Pasta DJ, et al.: Forced Expiratory Volume in 1 Second Variability Helps Identify Patients with Cystic Fibrosis at Risk of Greater Loss of Lung Function. J Pediatr. 2016; 169: 116–21.e2. PubMed Abstract | Publisher Full Text\n\nStanojevic S, Bilton D, McDonald A, et al.: Global Lung Function Initiative equations improve interpretation of FEV1 decline among patients with cystic fibrosis. Eur Respir J. 2015; 46(1): 262–4. PubMed Abstract | Publisher Full Text\n\nStanojevic S, Stocks J, Bountziouka V, et al.: The impact of switching to the new global lung function initiative equations on spirometry results in the UK CF registry. J Cyst Fibros. 2014; 13(3): 319–27. PubMed Abstract | Publisher Full Text\n\nCastellani C, Cuppens H, Macek M Jr: Consensus on the use and interpretation of cystic fibrosis mutation analysis in clinical practice. J Cyst Fibros. 2008; 7(3): 179–196. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee TW, Brownlee KG, Conway SP, et al.: Evaluation of a new definition for chronic Pseudomonas aeruginosa infection in cystic fibrosis patients. J Cyst Fibros. 2003; 2(1): 29–34. PubMed Abstract | Publisher Full Text\n\nNightingale JA, Osmond C: Does current reporting of lung function by the UK cystic fibrosis registry allow a fair comparison of adult centres? J Cyst Fibros. 2017; 16(5): 585–591. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMant J, Hicks N: Detecting differences in quality of care: the sensitivity of measures of process and outcome in treating acute myocardial infarction. BMJ. 1995; 311(7008): 793–796. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoo ZH, El-Gheryani MSA, Curley R, et al.: Dataset 1 in: Using different methods to process forced expiratory volume in one second (FEV1) data can impact on the interpretation of FEV1 as an outcome measure to understand the performance of an adult cystic fibrosis centre: A retrospective chart review. F1000Research. 2018. Data Source"
}
|
[
{
"id": "34828",
"date": "06 Jul 2018",
"name": "Edward McKone",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFEV1 as a percent of predicted is widely used as an outcome measure in patients with cystic fibrosis and is one of the metrics used to compare centres or countries in benchmarking exercises. This manuscript presents data showing that differences in data processing and the use of different reference equations used to estimate FEV1 as a percent predicted can result varying estimates of lung disease changes and potentially impact comparisons of centres/countries.\n\nThe paper supports the standardization of FEV1 collection and reference equations which is currently in development by CF International Registries. It also highlights that different approaches to data collection can impact the interpretation of statistical analyses.\n\nComments:\n\nDifferences in FEV1 percent predicted using different equations is well known (Rosenfeld et al1 and more recently in the cited UK/US comparison study). For this reason, the GLI have been recently accepted as the standard for most CF registries.\n\nAlthough year to year subtraction is a method of looking at longitudinal changes, regression methodology is preferable to analyse these changes, especially, as in this case, where you have 3 time points. This also allows to adjust for baseline factors such as lung disease severity.\n\nThe method of adjustment for baseline Iung function is a bit crude. The medians subtracted are from a US population over 10 years ago and are likely to overestimate lung function decline in this population. 
In the Morgan et al, J Pediatr 2016 paper cited, the benefits of using this type of adjustment were shown using regression.\n\nDid their statistical approach factor in that these were repeated measures in the same patients?\n\nBland & Altman plots comparing different reference equations could be considered.\n\nThe results suggest that height inaccuracy is impacting the results. As this is a single-centre study, it is difficult to determine if this is a more universal problem.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3903",
"date": "17 Aug 2018",
"name": "Zhe Hui Hoo",
"role": "Author Response",
"response": "We thank Prof McKone for the review and we will iterate the manuscript taking into account the suggestion to compare the different reference equations (Knudson vs GLI) using Bland-Altman analysis. We concur the GLI has been recently accepted as the standard for most CF registries. We concur that regression analyses is preferable to determine FEV1 decline. As recommended by Prof Burgel, we will replace the term \"FEV1 decline\" with \"year-to-year FEV1 variation\" in the revised manuscript. We concur that the method used to adjust year-to-year FEV1 variation for baseline FEV1 is crude. The displayed data from the ESCF paper is only presented according to the four FEV1 categories, hence our choice of adjustment method. Given the limited number of subjects within the Sheffield dataset, we felt is it is more appropriate to use reference values for suitably large datasets instead of simply calculating the predicted %FEV1 change using the Sheffield dataset. There are more recent reference values for FEV1 from the ECFSPR (Boëlle et al, 2012) and Canadian registry (Kim et al, 2018); however those papers do not provide reference values for year-to-year FEV1 variation. Our statistical method account for repeated FEV1 measures since: 1. by using best FEV1, there is only x1 FEV1 reading per person per year 2. only x1 FEV1 reading per person was used to calculate the year-to-year FEV1 variation As mentioned in the discussion section, we concur that a single-centre study may not be generalisable. However, inaccurate data recording within routine datasets (e.g. CF registries) is unlikely to be an isolated problem. For example, the letter by Hartley et al (2016) in JCF revealed that 6% of the adults with CF at the Manchester Adult CF Centre had incorrect genotype data recorded in the UK CF registry."
}
]
},
{
"id": "34826",
"date": "17 Jul 2018",
"name": "Pierre-Régis Burgel",
"expertise": [
"Reviewer Expertise Adult pulmonologist with experience in the care of adults with cystic fibrosis. Researcher."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors performed a retrospective analysis of FEV1% predicted data over 3 years in an adult CF center in the UK. They examined FEV1 decline from year to year by calculating variation in best FEV1 during two consecutive years and examined the impact of using data obtained using Knudson equation (directly extracted from the spirometer or recalculated with the appropriate height) vs. GLI equation. They also performed an adjustment using ESCF data.\nThe authors concluded that trends in FEV1 decline were robust among methods, although the results were somewhat different using different methods/equations.\nThe study has some interest in highlighting problems associated with these type of calculations, especially when used for benchmarking (as in the UK).\nI have the following comments for improvement:\n\nAn important drawback of Knusdon equation is related to the change of FEV1 in the transition from pediactric to adults. This is why the GLI is nowadays often used in mixed pediatric/adult population. The authors used the UK definition of adults (over 16 years) and suggested that some of the difference in their results between Knudson and GLI data are due to the younger patients in this cohorts. I would be happier if the authors could perform a sensitivity analysis using only patients 18 years an over? This would miniminze the Knusdon/GLI age bias and would make these results more relevant to the adult centres outside of UK. 
Looking at Table 1, it seems that only a minority of patients were below 18 years. I think the term FEV1 decline is inappropriate in this manuscript. A year-to-year variation (even over 3 years) is not a decline. For calculating a decline, you would need multiple data points (at the very least 3 data points) and perform more complicated analyses (e.g., mixed model analysis). I would suggest removing the word decline from the manuscript, as the main goal of the authors did not appear to be FEV1 decline but mostly year-to-year FEV1 variation, which is used for benchmarking in the UK.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3902",
"date": "17 Aug 2018",
"name": "Zhe Hui Hoo",
"role": "Author Response",
"response": "We thank Prof Burgel for the review and we will iterate the manuscript taking into account the two very useful suggestions, i.e. 1. we will perform a sensitivity analysis for the results in Table 2 using only adults aged 18 years and above 2. we will replace the term \"FEV1 decline\" with \"year-to-year FEV1 variation\""
}
]
}
] | 1
|
https://f1000research.com/articles/7-691
|
https://f1000research.com/articles/7-1296/v1
|
15 Aug 18
|
{
"type": "Software Tool Article",
"title": "Quantification of cancer cell migration with an integrated experimental-computational pipeline",
"authors": [
"Edwin F Juarez",
"Carolina Garri",
"Ahmadreza Ghaffarizadeh",
"Paul Macklin",
"Kian Kani",
"Carolina Garri",
"Ahmadreza Ghaffarizadeh",
"Paul Macklin",
"Kian Kani"
],
"abstract": "We describe an integrated experimental-computational pipeline for quantifying cell migration in vitro. This pipeline is robust to image noise, open source, and user friendly. The experimental component uses the Oris cell migration assay (Platypus Technologies) to create migration regions. The computational component of the pipeline creates masks in Matlab (MathWorks) to cell-covered regions, uses a genetic algorithm to automatically select the migration region, and outputs a metric to quantify cell migration. In this work we demonstrate the utility of our pipeline by quantifying the effects of a drug (Taxol) and of the extracellular Anterior Gradient 2 (eAGR2) protein on the migration of MDA-MB-231 cells (a breast cancer cell line). In particular, we show that inhibiting eAGR2 reduces migration of MDA-MB-231 cells.",
"keywords": [
"Migration",
"Quantification",
"User-Friendly",
"Microscopy."
],
"content": "Introduction\n\nIn order to understand and treat cancer, we need to study and ultimately control metastasis1. A key aspect of metastasis is cell migration1. Thus, assays that can reliably provide quantitative readouts of cell migration are an important component of cancer research. Here we describe an integrated computational pipeline to quantify cell migration using fluorescent microscopy.\n\nThe anterior gradient protein 2 (AGR2) has been shown to promote cell migration2,3. High expression of AGR2 is correlated with aggressive forms of various adenocarcinomas including prostate3 and breast cancer4,5. Therefore, AGR2 is a biomarker and therapeutic target that will provide a suitable biological readout of cell migration for our pipeline to quantify. In other words, because AGR2 is known to promote cell migration, it is ideal for testing a computational platform’s ability to detect changes in cell migration.\n\nIn this work, we describe an experimental and computational pipeline to quantitate cell migration. We demonstrate this pipeline by quantifying migration of MDA-MB-231 cells, a breast cancer cell line known to migrate aggressively6. We show that blocking the extracellular AGR2 (eAGR2) with a neutralizing antibody that binds specifically to AGR2 (referred to as AGR2-Ab) in cell medium prevents the migration of the MDA-MB-231 cells3. Our pipeline aids in the verification of well-established hypotheses and it can be used to test new hypotheses, thus aiding in and accelerating the drug discovery process.\n\n\nMethods\n\nMDA-MB-231 cell line was obtained from the American Type Culture Collection (ATCC, No. HTB-26) and tested for mycoplasma contamination before usage. Cells were maintained in DMEM media (Thermo Fisher, No. 21063045), supplemented with 10% fetal bovine serum (FBS), they were kept at 37°C in a humidified incubator with 5% CO2. Cells were used within 6 months of purchase.\n\nThe Oris™ migration assay (No. 
CMA1.101, Platypus Technologies, Madison, WI, USA) uses a physical barrier “stopper” to create a defined circular region that is intended to prevent cell adhesion at the start of the assay. This central cell-free detection zone is in the center of each well of a 96-well plate. As the cells migrate to the cell-free zone over 24–48 hours, real-time assessment of migratory cells allows acquisition of richer data sets. Since there are no artificial membranes or inserts in the light path through which cells must pass, this assay is amenable to quantification with microscopy. We used the Oris cell migration assay from Platypus Technologies to create migration regions by inserting stoppers in each of the 96 wells on a plate. Shortly after inserting the stoppers, we seeded MDA-MB-231 cells and waited until they reached 80% confluence (approximately 24 hours). We then fed cells with either treated or untreated media. Treated media included Taxol (5 nM), mouse anti-AGR2 antibody (1:50 of 2 μg/ml) (Santa Cruz Biotechnology, No. sc-101211), and mouse-IgG (Santa Cruz Biotechnology, No. sc-2005), while the untreated media was Dimethyl sulfoxide (DMSO) (VWR, No. 97061-250). Next we removed the stoppers and allowed the cells to move into the migration region. 48 hours after removal of the stoppers, we imaged each well using a fluorescence microscope (Zeiss Observer.Z1 microscope at 2.5x objective with AxioCam MRm camera and Axio Vision 4.8 software).\n\nThis tool reads microscopy images which are placed in the same folder as the main code. First, the script trains using the negative control (i.e., the image of a well where the stopper was not removed) to find the optimal disk which represents the migration region. 
Next, the script finds all the images present in the same folder as the code (by default it looks for TIF files) and applies the migration quantification metric to each of them, recording its output in a text file called output_c3.txt for easy access and plotting (a Python 3 script which creates a publication-quality plot based on this output is provided as well for any user who wishes to use it).\n\nTypically, a user only needs to make one minor modification to the main script, called code.m: change the img_name variable (on the third line of the script) to reflect the name of the image which contains the negative control, and then run the script.\n\nIf the user desires to change the format of the images the script uses for the quantification, the line file_list = dir('*.tif'); needs to change to reflect the desired format. The images need to be in the same directory as the script.\n\nThis tool has been tested on multiple laptops running Matlab, spanning releases R2015a–R2018a.\n\nWe first created a mask corresponding to the area covered by cells using standard deviation filtering and applying a series of morphological operations in Matlab R2016a (MathWorks) as shown in Figure 1.\n\nIn order to identify the migration region, we took images of each well (left), then selected a mask that covers the area utilized by cells, highlighted in green (right).\n\nNote that these images are gray scale (green is used throughout to highlight software outputs as is shown in the right panel of Figure 1), hence every pixel’s value belongs to the interval [0,1] where a completely black pixel has value 0 and a completely white pixel has value 1. 
Also note that a mask is a binary matrix that indicates which pixels belong to the mask (with a value of 1; these pixels are referred to as “cell pixels”) and which pixels do not belong to the mask (with value 0).\n\nWe then used a genetic algorithm to determine the coordinates of the center and the radius of a circle according to Equation (1). This optimal circle determines the migration region.\n\n(cx*, cy*, r*) = argmax_(cx, cy, r) [ #M − p · ∑_(m_i,j ∈ M) m_i,j ]   (1)\n\nwhere cx*, cy*, and r* are the optimal parameters of the migration region, M is the Matlab mask we are evaluating (i.e., a circle with center at coordinates (cx, cy) and radius r), so ∑_(m_i,j ∈ M) m_i,j is the sum of all the pixel intensities (m_i,j) which belong to the mask M, #M is the cardinality of M (i.e., the number of pixels which belong to M), and p is a penalty parameter. If p = 1, we have:\n\nM* = argmin_M [ ∑_(m_i,j ∈ M) m_i,j − #M ]   (2)\n\nHence, the maximization problem from Equation (1) is equivalent to the minimization represented in Equation (2) (when p = 1). From Equation (2), we can interpret the optimization performed by the genetic algorithm as finding “the largest circle which contains the least number of cell pixels.” Figure 2 shows the optimal circular region selected by the genetic algorithm when the input is the image from Figure 1.\n\nThe genetic algorithm selects the largest circle which contains the least number of masked pixels from the cell area mask. This optimal circle is the migration region.\n\nPercent of migration region covered by cells: We defined a metric to quantify the migration of MDA-MB-231 cells. This metric is Q, the percentage of migration pixels inside the migration region. We define a migration pixel as any pixel whose intensity value is greater than or equal to a threshold T. We chose T equal to 1.25 times the median pixel intensity of the migration region immediately after the stopper was removed (i.e., the green region in Figure 2). 
This is:\n\nQ = 100 · #{M* ≥ T} / #M*\n\nwhere M* is the optimal circle defined by the three parameters ([cx*, cy*, r*] from Equation (1) and Equation (2)) and the set {M* ≥ T} includes all of the pixels inside M* with intensities greater than or equal to T.\n\nA previous version of this manuscript is available from bioRxiv: https://doi.org/10.1101/1305267\n\n\nResults and discussion\n\nTo test the hypothesis that MDA-MB-231 cells’ migration is reduced in the absence of AGR2, we designed an experiment (utilizing the cell migration assay described in the Methods section) with 5 experimental conditions: a positive control (untreated cells), a negative control (wells where the stopper was not removed), cells treated with 10nM of Taxol (a non-cytotoxic dose level which prevents cell migration but does not promote cell death8), cells treated with a 1:50 dilution of AGR2-Ab to inactivate eAGR2, and with a 1:50 dilution of IgG, a control antibody (Ctrl-Ab) which does not affect cell migration. Representative images from these conditions (i.e., replicate 1) are shown in Figure 3 (top).\n\nRepresentative images (replicate 1) of each condition are shown (top). 10nM of Taxol and 10µg/mL of the H10 peptide show similar levels of migration inhibition compared to the positive and negative controls. Our metric (bottom) allows us to quantify the qualitative results (top).\n\nFor the untreated case and the control peptide we observe increased migration, with 46±2 (mean ± standard error of the mean) percent of the migration region covered in the untreated case and 42±1.7 percent of the migration region covered in the control antibody case. We fail to reject the null hypothesis that these two are the sample means from the same distribution (p value of 0.084). In the Taxol case, 21±1.9 percent of the migration region is covered. We reject the null hypothesis that the mean of the Taxol population and the mean of the untreated case are sample means from the same distribution (p value of 8.16e-5). 
Similarly, for the AGR2-Ab case, 13±3.2 percent of the migration region is covered. We reject the null hypothesis that the mean of the AGR2-Ab population and the mean of the untreated case are sample means from the same distribution (p value of 2.19e-5). Not only do we confirm the hypothesis that MDA-MB-231 cells’ migration is reduced in the absence of AGR2, but our method allows for reproducible quantification of these qualitative observations. Furthermore, the algorithm used to compute Q requires a single input from the user (a string with the names of the control experiments) and it can run on a desktop machine with Matlab (R2015a–R2018a) installed.\n\n\nConclusions and future work\n\nWe have designed and implemented a pipeline for quantifying cell migration in vitro. It is worth noting that this metric may not discern between cell motility and proliferation; hence, in order to use it to estimate parameters for a mechanistic model, a parameter estimator such as CellPD9 should be used with models which decouple proliferation and motility10,11. However, this metric is robust to image noise, open source, replicable, and user friendly. In particular, we show that blocking eAGR2 reduces the migration of MDA-MB-231 cells. This pipeline can be expanded to various cancer cell lines and model systems.\n\n\nData availability\n\nRaw, unedited microscope images from this analysis are available from the project GitHub repository: https://github.com/edjuaro/cell-migration-quantification\n\n\nSoftware availability\n\nThe source code used throughout this manuscript can be accessed in the public GitHub repository: https://github.com/edjuaro/cell-migration-quantification.\n\nArchived source code at time of publication can be found here: http://doi.org/10.5281/zenodo.132392312\n\nThis code is released under the permissive MIT license.",
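The optimization in Equations (1)–(2) and the metric Q can be sketched in a few lines. The published tool is a Matlab script (code.m), so the Python sketch below is an illustration only: the mutation-only genetic algorithm (elitist selection with Gaussian mutation), the synthetic image, and all function names are assumptions of this sketch, not the authors' implementation. A penalty of p = 1.5 is used because, with p = 1 and the synthetic intensities chosen here, adding a cell pixel would be nearly cost-free.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(img, cx, cy, r, p=1.0):
    """Objective of Equation (1): pixel count inside the circle minus p times
    the summed intensities of the pixels it contains (to be maximised)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return inside.sum() - p * img[inside].sum()

def fit_circle(img, p=1.5, pop_size=40, generations=60):
    """Toy mutation-only genetic algorithm over (cx, cy, r):
    elitist selection plus Gaussian mutation. Not the published Matlab code."""
    h, w = img.shape
    pop = np.column_stack([rng.uniform(0, w, pop_size),
                           rng.uniform(0, h, pop_size),
                           rng.uniform(2, min(h, w) / 2, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(img, *ind, p) for ind in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]  # keep best quarter
        kids = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        kids = kids + rng.normal(0.0, 1.5, kids.shape)    # mutate
        kids[:, 2] = np.clip(kids[:, 2], 2, min(h, w) / 2)
        pop = np.vstack([elite, kids])
    scores = np.array([fitness(img, *ind, p) for ind in pop])
    return pop[np.argmax(scores)]  # (cx*, cy*, r*)

def migration_Q(img, cx, cy, r, T):
    """Q: percentage of pixels inside the circle with intensity >= T."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return 100.0 * (img[inside] >= T).mean()

# Synthetic well: bright "cells" everywhere except an empty central zone
img = np.full((80, 80), 0.9)
yy, xx = np.mgrid[0:80, 0:80]
img[(xx - 40) ** 2 + (yy - 40) ** 2 <= 20 ** 2] = 0.05
cx, cy, r = fit_circle(img)
print(round(migration_Q(img, cx, cy, r, T=0.5), 1))  # expected to be small
```

On this synthetic well the fitted circle should roughly recover the empty central zone, and Q stays low because no "cells" have entered it; raising the intensity inside the zone (simulating migration) raises Q.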
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the USC Center for Applied Molecular Medicine, the National Institutes of Health (Physical Sciences Oncology Center [5U54CA143907] for Multi-scale Complex Systems Transdisciplinary Analysis of Response to Therapy (MCSTART), and [1R01CA180149]), the Breast Cancer Research Foundation, the USC James H. Zumberge Research and Innovation Fund, and a USC Provost’s PhD fellowship.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nJones DH, Nakashima T, Sanchez OH, et al.: Regulation of cancer cell migration and bone metastasis by RANKL. Nature. 2006; 440(7084): 692–696. PubMed Abstract | Publisher Full Text\n\nBrychtova V, Mohtar A, Vojtesek B, et al.: Mechanisms of anterior gradient-2 regulation and function in cancer. Semin Cancer Biol. 2015; 33: 16–24. PubMed Abstract | Publisher Full Text\n\nGarri C, Howell S, Tiemann K, et al.: Identification, characterization and application of a new peptide against anterior gradient homolog 2 (AGR2). Oncotarget. 2018; 9(44): 27363–27379. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKani K, Malihi PD, Jiang Y, et al.: Anterior gradient 2 (AGR2): blood-based biomarker elevated in metastatic prostate cancer associated with the neuroendocrine phenotype. Prostate. 2013; 73(3): 306–315. PubMed Abstract | Publisher Full Text\n\nFritzsche FR, Dahl E, Pahl S, et al.: Prognostic relevance of AGR2 expression in breast cancer. Clin Cancer Res. 2006; 12(6): 1728–1734. PubMed Abstract | Publisher Full Text\n\nPrice JT, Tiganis T, Agarwal A, et al.: Epidermal growth factor promotes MDA-MB-231 breast cancer cell migration through a phosphatidylinositol 3′-kinase and phospholipase C-dependent mechanism. Cancer Res. 1999; 59(21): 5475–5478. 
PubMed Abstract\n\nJuarez EF, Garri C, Ghaffarizadeh A, et al.: Quantification of Cancer Cell Migration with an Integrated Experimental-Computational Pipeline. bioRxiv. Cold Spring Harbor Laboratory. 2017. Publisher Full Text\n\nZhang D, Yang R, Wang S, et al.: Paclitaxel: new uses for an old drug. Drug Des Devel Ther. 2014; 8: 279–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJuarez EF, Lau R, Friedman SH, et al.: Quantifying differences in cell line population dynamics using CellPD. BMC Syst Biol. 2016; 10(1): 92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimpson MJ, Treloar KK, Binder BJ, et al.: Quantifying the roles of cell motility and cell proliferation in a circular barrier assay. J R Soc Interface. 2013; 10(82): 20130007. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTreloar KK, Simpson MJ, McElwain DL, et al.: Are in vitro estimates of cell diffusivity and cell proliferation rate sensitive to assay geometry? J Theor Biol. 2014; 356: 71–84. PubMed Abstract | Publisher Full Text\n\nJuárez E: edjuaro/cell-migration-quantification: Pre-publication release (Version 1.0). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1323923"
}
|
[
{
"id": "37224",
"date": "29 Aug 2018",
"name": "Mohammed El-Kebir",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors introduce a tool for identifying the migration region in microscopy images from an in vitro experiment for metastasis in cell lines. Mathematically, the problem is to find the largest circle that contains the fewest cells. There are a couple of issues/inconsistencies in this paper that I will outline below.\nWhy do you need to train the tool on a negative control?\nPlease specify which parameters are being learned? Are you learning the value of parameter p? Please be specific.\n\nMore motivation is needed\nIs the Oris cell migration assay the only assay where a circular stopper is used? How often is this assay used? Please describe this so that the reader can understand if your tool is widely applicable.\n\nDescription of related work is missing\nIn computational geometry, there is a problem that is called minimum enclosing circle (MEC). Please describe how and if your problem is different. (I'm not convinced it is - as there is a weighted version of MEC.)\n\nTool needs a name\nRight now the tool is called code.m. Please give it a name.\n\nNotation\nPlease don't use #M to denote the cardinality of M -- rather use |M|.\nEquation (1): don't use c_i, c_j this clashes with m_{i,j}. Use c_x and c_y. Equation (2): don't use M in arg min. For consistency with (1) use c_x, c_y, r.\nWhat is the relevance of p=1? Is this what you used in your experiments? 
If so, no need for this parameter in (1).\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? No\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "38365",
"date": "18 Sep 2018",
"name": "Assaf Zaritsky",
"expertise": [
"Reviewer Expertise Computational cell dynamics Quantitative cell biology Cell migration High throughput phenotyping Computer vision application"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn their Software Tool Article “Quantification of cancer cell migration with an integrated experimental-computational pipeline”, Juarez et al present a software to quantify circular monolayer migration assay. The assay itself is quite basic and widely used, although mostly not with circular geometry, which is used as an assumption by the segmentation algorithm. Multiple algorithms and tools exist for this task and the authors should compare their methods to alternatives. Quite a few details are missing making the algorithm very hard to read and interpret.\nSpecific major concerns / revisions:\nThe authors missed to cite a significant body of work methods for quantification of monolayer cell migration. Also, the experimental assay should be put into context – there are many alternatives assays to perform monolayer migration, Oris is just one example. As a “Software Tool” article, I am expecting the authors to benchmark using a ground truth annotated dataset and compare its performance to at least one of the existing alternatives. Here is one easy-to-use optional method, Gebäck et al1 (that does not use the information about the circular geometry). The authors missed to discuss the limitations of their method. Specifically, the assumption that the pattern is circular does not fit most experimental settings. Missing technical details:\nThe description of the algorithm is not clear to me. Is there a first stage of segmentation (thresholding?). 
Second stage of using the binary mask an input to optimize a fit to a circle as the initial geometry (is it the negative control?) using a genetic algorithm (GA). Third stage where segmentation is performed by simple thresholding and Q is calculated in relation to the initial circle as a measure to migration? This is very hard to interpret from the text. A major revision is essential. No details are provided for the GA. Many (most?) of the readers will not even know what is a GA. Moreover, there are practically no implementation details provided in the text.\n\nI am not certain from the text whether m_ij values are binary (0/1) or the actual pixel intensity. In the latter case, the assumptions used to define the function to optimize with the GA do not necessarily hold as they are heavily dependent on the values and variability of the pixel’s intensity in the background and foreground. The sentence “we can interpret the optimization performed by the genetic algorithm as finding “the largest circle which contains the least number of cell pixels.” is not necessarily true when using the raw pixels intensities and has to at least be discussed - how was this equation derived? what were the assumptions? how to set the value of the parameter p? I would like to see the complete algorithm and how the equation was derived in any case. What statistical tests were performed? Why do we see only 4 replication per experimental condition when the experiments were performed in a 96-well plate.\n\nOther revisions and suggestions:\n“A key aspect of metastasis is cell migration1” - this association of cell migration and metastasis is not established, definitely not through a 2D monolayer migration assay. Also, bone cancer (ref #1) is very different than the model used for this study (breast cancer cell line). Monolayer cell migration assays are quite prevalent and a more relevant argument as motivation can be articulated. 
Overall, I suggest to tune-down the relevance to cancer throughout this article. I would suggest to switch the order of the 2nd and 3rd paragraphs in the introduction. The focus of this article is the software tool. The experimental perturbation (AGR2) can come later (or better, not at all in the introduction). In the same paragraph, why mention solely the AGR2 perturbation and ignore the Taxol-perturbation. The authors should also mention somewhere that Taxol is a drug that inhibit microtubules. “Our pipeline aids in the verification of well-established hypotheses” – it is the other way around: the known perturbation that is known to impair migration can be used to validate the method. Typos / grammer:\n“ZeoissObserver.Z1” “First, the script first trains” “If the user desires change the format of the images”\n\nMissing detail:\n“and applying a series of morphological operations” “We then used a genetic algorithm to determine the coordinates of the center and the radius of a circle”\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1296
|
https://f1000research.com/articles/7-1009/v1
|
05 Jul 18
|
{
"type": "Research Article",
"title": "Acute hip fracture surgery anaesthetic technique and 30-day mortality in Sweden 2016 and 2017: A retrospective register study",
"authors": [
"Caroline Gremillet",
"Jan G. Jakobsson",
"Caroline Gremillet"
],
"abstract": "Background: Hip fractures yearly affect 1.6 million patients worldwide, often the elderly with complex comorbidity. Mortality following surgery for acute hip fracture is high. The high mortality rate is multifactorial; high age, comorbidities and complication/deterioration in health following surgery. Whether the anaesthesia technique affects the 30-day mortality rate has been studied widely without reaching a consensus. The primary aim of this study was to determine anaesthetic techniques used in Sweden and their impact on the 30-day mortality rate in the elderly, who underwent acute hip fracture surgery. Other aims were to study the impact of age, gender, ASA class, fracture type and delay in surgery on the 30-day mortality rate. Methods: Data from 13,649 patients ≥50 years old who had undergone acute hip fracture surgery and been reported to Swedish perioperative register (SPOR) between 2016 and 2017 were analysed.\n\nResults: The most commonly used anaesthetic technique was neuraxial anaesthesia (NA; 11257, 82%), followed by general anaesthesia (GA; 2190, 16%) and combined general and neuraxial anaesthesia (CA; 202, 1.5%) out of the 13,649 studied. The 30-day mortality rate was 7.7% for the entire cohort; GA 7.8%, NA 7.7% and CA 7.4%. Mortality was higher in elderly patients, those with a high ASA class, pertrochanteric fracture and males.\n\nConclusions: The present study showed that NA is by far the most common anaesthetic technique for acute hip fracture surgery in Sweden. However, the anaesthetic technique used during this type of surgery had no impact on the 30-day mortality rate of patients. Increasing age, ASA class and male gender increased the 30-day mortality.",
"keywords": [
"acute hip fracture",
"anaesthetic technique",
"neuraxial anaesthesia",
"spinal",
"epidural",
"general anaesthesia",
"30-day mortality"
],
"content": "Introduction\n\nHip fractures yearly affect 1.6 million patients worldwide and the incidence is raising, often the elderly with comorbidities1. There are annually approximately 17,500 patients with hip fracture in Sweden, the majority being females and the elderly http://rikshoft.se - /rikshoft_rapport2016.pdf. The search for safe and effective anaesthetic techniques for the management of the elderly patient with fracture is still on-going. There are several techniques possible, all with various benefits and potential negative effects. Neuraxial techniques (spinal and epidural) have the benefit in avoiding the need for airway management and only minor effects on cerebral function. However, blood pressure may drop, which is associated with spinal bupivacaine, and there are data showing a drop in blood pressure being a major risk factor2. Neuraxial anaesthesia and oral anticoagulants is also a matter of discussion3. Delay surgery to await the anticoagulant elimination may not be optimal4. The most recent meta-analysis has not been able to show any clear benefit comparing neuraxial and general anaesthesia5,6.\n\nThe aim of the present study was to assess the choice of main anaesthetic technique for acute hip fracture surgery in patients ≥50 years old and the impact of main anaesthetic technique on the 30-day mortality in Sweden. The primary outcome was the impact of anaesthetic technique, general vs. neuraxial, on 30-day mortality. Secondary outcomes were effects of age, sex, ASA class, fracture type and surgery within and after 24 hours on the 30-day mortality.\n\n\nMethods\n\nThis is a retrospective register study. Ethical permission for the study was obtained from The Regional Ethical Review Board in Stockholm (Dnr: 2017/1915-31; approved 2017-11-08, Annika Sandström). 
Patient informed consent is not required for register studies in accordance with Swedish research regulations.\n\nThe Swedish Perioperative Register (SPOR) data from January 1st 2016 to December 31st 2017 were reviewed. A diagnosis of acute hip fracture (fracture of the femur: collum fracture (S72.0), pertrochanteric fracture (S72.10) and subtrochanteric fracture (S72.2)), age above 50 years, emergent surgery and information on 30-day mortality were the inclusion criteria for analysis.\n\nThe data-sheets retrieved from SPOR for the study analysis were based on the above inclusion criteria and SPOR had helped to categorise anaesthesia into three groups: neuraxial anaesthesia with and without sedation (NA); general anaesthesia (GA); and combined general and neuraxial anaesthesia (CA).\n\nContinuous data are presented as mean and standard deviation. Categorical data are presented as frequencies, given as numbers and percent. Differences in mortality were studied by Chi-square test. Continuous variables were analysed by ANOVA and Student's t-test. A p-value < 0.05 was considered statistically significant. Non-adjusted and adjusted odds ratios with confidence intervals were calculated for the primary study variable and the main confounding factors. This is a retrospective register study; thus, no power analysis has been conducted. All statistical analyses were performed using IBM® SPSS Statistics® for Macintosh version 24 (Armonk, New York, USA) and Microsoft Excel © 2017 version 16.9.\n\n\nResults\n\nA total of 13,649 patients were included in the analysis (Figure 1); 4,601 males and 9,048 females with a mean age of 82 ± 9.6 years. Patients’ demographics are presented in Table 1.\n\nAge is presented in years. For age, results are presented as mean (SD); for age subgroups, results are presented as number of patients (percentage). For all other categories results are presented as number of patients (percentage between rows) (percentage between columns).
P-value with 95% CI.\n\nAbbreviations: GA = general anesthesia, CA = combined general plus neuraxial anesthesia, NA = neuraxial anesthesia, deceased = 30-day mortality, col = collum femoris fracture, per = pertrochanteric fracture, sub = subtrochanteric fracture, unknown = missing data on variable, ASAPS = American Society of Anesthesiologists physical status.\n\nNA (spinal, epidural and combined spinal/epidural) was the most common anaesthetic technique used (82.5% of patients), GA was used in 16% and CA in 1.5% of patients. Mean age was similar between the anaesthetic techniques studied; the proportion of patients in the age classes 75–84 years and >85 years was, however, higher among NA compared to GA (79 vs 75%; p<0.0001). Sex was evenly distributed: 64 and 67% of GA and NA patients were female, respectively. Collum fracture was the dominating fracture type (56 and 54% of GA and NA patients, respectively). ASA class 3 was the most common functional class, comprising more than 50% of all patients. The proportion of ASA classes 3–5 was higher among GA compared to NA (73 vs 59%; p<0.0001).\n\nThe 30-day mortality for the entire study cohort was 7.7%, with no significant difference between the three anaesthetic techniques studied (GA 7.8%, CA 7.4% and NA 7.7%; Table 1).\n\nMost patients had surgery within 24 hours and there was no difference in delay to surgery between anaesthetic techniques (Table 2). Duration of anaesthesia, surgery or PACU stay was similar for GA and NA, but somewhat longer for CA. There was no clear difference in registered blood loss except for the CA group of patients (Table 2).\n\nTime to surgery, anesthesia time, surgery time and PACU time are presented as means in hours:minutes.
Blood loss is presented as means in milliliters.\n\nAbbreviations: PACU = post anesthesia care unit, GA = general anesthesia, CA = combined general plus neuraxial anesthesia, NA = neuraxial anesthesia\n\nThe 30-day mortality was higher among males (10.6%) compared to females (6.2%) and increased for each age class: from 2% among patients 50–64 years old to 11.6% in patients above 85 years of age (see Table 3). There was also a significant difference in 30-day mortality between fracture types and with increasing ASA class (Table 3). The odds ratio for mortality in relation to anaesthetic technique did not change when adjusted for age, sex, type of fracture and ASA class (Table 4). There was no difference in 30-day mortality between patients who had surgery within 24 hours and those who had surgery later; however, the number of patients having surgery beyond 24 hours was small (Table 5). No differences were seen in duration of anaesthesia, surgery or PACU stay between patients who died and those who survived at day 30 (Table 5).\n\nAge is presented in years as mean (SD). Age was categorized into subgroups and results are presented as number of patients (percentage). For all other categories results are presented as number of patients (percentage between rows) (percentage between columns).
P-value with 95% CI.\n\nAbbreviations: GA = general anesthesia, CA = combined general plus neuraxial anesthesia, NA = neuraxial anesthesia, deceased = 30-day mortality, Collum = collum femoris fracture, unknown = missing data on variable, ASAPS = American Society of Anesthesiologists physical status.\n\nCombined anaesthesia, age class 50-64, female sex, collum fracture and ASA 1 were set as references.\n\nAbbreviations: GA = general anesthesia, CA = combined general plus neuraxial anesthesia, NA = neuraxial anesthesia, deceased = 30-day mortality, Collum = collum femoris fracture, unknown = missing data on variable, ASAPS = American Society of Anesthesiologists physical status.\n\nPerioperative times are calculated as means and presented as hours:minutes. Blood loss is calculated as means and presented as milliliters. P-value with 95% CI.\n\nAbbreviations: PACU = post anesthesia care unit, h = hours.\n\n\nDiscussion\n\nWe found NA to be by far the most common anaesthetic technique used for acute hip fracture surgery in patients above 50 years of age. However, anaesthetic technique did not impact the 30-day mortality in this retrospective register study of patients having surgery for acute hip fracture. The 30-day mortality increased with age and ASA class. The 30-day mortality was also higher in males as compared to females; fracture type also impacted mortality (pertrochanteric fracture was associated with higher mortality).\n\nOur results are in line with previous studies suggesting that anaesthetic technique per se does not have a major impact on mortality2,3. Our overall mortality is also in line with the mortality described in a recent study from the US, including 107,317 hip fracture patients. That study found a 30-day mortality of 8.5%7. Our mortality rate is, however, somewhat higher than that described by Neuman et al. in a study published in 2014 from New York8. This study was likewise unable to show any difference in 30-day mortality between general and regional anaesthesia.
They did, however, find a 0.6-day shorter hospital stay in the spinal/epidural group of patients.\n\nThere are several limitations of this study. This is a retrospective register study, with data derived from the relatively new Swedish Perioperative Register (SPOR)9. Registers are dependent on input and data-management, and we are aware that a number of patients were excluded from the analysis of outcome due to missing information. It should also be acknowledged that there are numerous potential alternative anaesthetic techniques for hip fracture surgery. We merely sorted anaesthesia into three main techniques: neuraxial, general and combined anaesthesia. Peripheral blocks and light anaesthesia/sedation may indeed be an option10,11. We have not considered these techniques in the present study.\n\nThere are without doubt huge differences in the surgical trauma between merely a screw fixation and a joint prosthesis. We did not explicitly study the impact of anticoagulation, or patients having anticoagulation therapy. A recent paper from the US did not find major differences in complications or death when comparing cohorts of patients with and without anticoagulation therapy; patients having anticoagulation therapy more commonly received GA (84 vs 62%)12. The combined technique was associated with longer perioperative times and more blood loss than GA and NA. This may reflect that combined spinal and epidural anaesthesia was chosen for more complex procedures; however, this is merely speculation. Tight haemodynamic control, maintaining blood pressure and heart rate within minimal deviation from preoperative values, has been suggested to have a major impact, and studies assessing its effect are under way13. Optimising haemodynamics by ultrasound monitoring may also facilitate the perioperative course14. Temperature control is also of importance15.
We cannot comment on the anaesthetic protocol performed in the patients included in this study or be more explicit about what drugs were used, nor the handling of any deviation in vital signs. The available register data unfortunately do not contain information on the quality of postoperative care, or the occurrence of delirium, postoperative pain and nausea, in sufficient detail for analysis. The postoperative course, mobilisation, ambulation, intake of food and drink, and discharge from hospital should indeed be assessed in future studies. Active rehabilitation and physiotherapy are of huge importance16.\n\nAge, comorbidities and increased ASA class are known risk factors for complications after hip fracture surgery17,18. Nutritional status, malnourishment, as well as obesity, may also increase the risk of complications19. The International Fragility Fracture Network has recently provided extensive guidelines based on consensus20. Still, further studies are indeed warranted to improve the understanding of how best to care for elderly patients with acute hip fracture.\n\n\nConclusion\n\nWe found in this retrospective SPOR study that neuraxial anaesthesia was by far the preferred anaesthetic technique in Sweden for acute hip fracture surgery in patients aged 50 years or more. However, anaesthetic technique (general vs. neuraxial vs. combined) per se did not have any influence on the 30-day mortality in this fragile patient group. Age above 75 years, ASA class 4 and 5, male gender and pertrochanteric fracture were more frequent among patients who died within 30 days of surgery. Further studies are warranted to determine the anaesthetic impact on morbidity and mortality following high-risk orthopaedic surgery.\n\n\nData availability\n\nThe data have been retrieved from the Swedish Perioperative Register (SPOR).
This is a national database supported by the National Board of Health and Welfare, the Swedish Society for Anaesthesia & Intensive Care and the Swedish Association of Local Authorities and Regions, and the data are thus protected. The data can be retrieved by request from SPOR (http://www.spor.se/) following Ethical Review Board approval via application (https://www.epn.se/en/start/).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by the Department of Anaesthesia & Intensive Care, Danderyds Hospital. No external funding was provided.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nDhanwal DK, Dennison EM, Harvey NC, et al.: Epidemiology of hip fracture: Worldwide geographic variation. Indian J Orthop. 2011; 45(1): 15–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhite SM, Moppett IK, Griffiths R, et al.: Secondary analysis of outcomes after 11,085 hip fracture operations from the prospective UK Anaesthesia Sprint Audit of Practice (ASAP-2). Anaesthesia. 2016; 71(5): 506–14. PubMed Abstract | Publisher Full Text\n\nCappelleri G, Fanelli A: Use of direct oral anticoagulants with regional anesthesia in orthopedic patients. J Clin Anesth. 2016; 32: 224–35. PubMed Abstract | Publisher Full Text\n\nGinsel BL, Taher A, Whitehouse SL, et al.: Effects of anticoagulants on outcome of femoral neck fracture surgery. J Orthop Surg (Hong Kong). 2015; 23(1): 29–32. PubMed Abstract | Publisher Full Text\n\nSmith LM, Cozowicz C, Uda Y, et al.: Neuraxial and Combined Neuraxial/General Anesthesia Compared to General Anesthesia for Major Truncal and Lower Limb Surgery: A Systematic Review and Meta-analysis. Anesth Analg. 2017; 125(6): 1931–1945. PubMed Abstract | Publisher Full Text\n\nVan Waesberghe J, Stevanovic A, Rossaint R, et al.: General vs. neuraxial anaesthesia in hip fracture patients: a systematic review and meta-analysis. BMC Anesthesiol. 2017; 17(1): 87. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcIsaac DI, Wijeysundera DN, Huang A, et al.: Association of Hospital-level Neuraxial Anesthesia Use for Hip Fracture Surgery with Outcomes: A Population-based Cohort Study. Anesthesiology. 2018; 128(3): 480–491. 
PubMed Abstract | Publisher Full Text\n\nNeuman MD, Rosenbaum PR, Ludwig JM, et al.: Anesthesia technique, mortality, and length of stay after hip fracture surgery. JAMA. 2014; 311(24): 2508–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChew MS, Mangelus C, Enlund G, et al.: Swedish Perioperative Register. Surgery was successful--but how did it go for the patient? Experiences from and hopes for the Swedish Perioperative Register. Eur J Anaesthesiol. 2015; 32(7): 453–4. PubMed Abstract | Publisher Full Text\n\nJohnston DF, Stafford M, McKinney M, et al.: Peripheral nerve blocks with sedation using propofol and alfentanil target-controlled infusion for hip fracture surgery: a review of 6 years in use. J Clin Anesth. 2016; 29: 33–9. PubMed Abstract | Publisher Full Text\n\nAlmeida CR, Francisco EM, Pinho-Oliveira V, et al.: Fascia iliaca block associated only with deep sedation in high-risk patients, taking P2Y12 receptor inhibitors, for intramedullary femoral fixation in intertrochanteric hip fracture: a series of 3 cases. J Clin Anesth. 2016; 35: 339–345. PubMed Abstract | Publisher Full Text\n\nLott A, Haglin J, Belayneh R, et al.: Does Use of Oral Anticoagulants at the Time of Admission Affect Outcomes Following Hip Fracture. Geriatr Orthop Surg Rehabil. 2018; 9: 2151459318764151. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoppett IK, White S, Griffiths R, et al.: Tight intra-operative blood pressure control versus standard care for patients undergoing hip fracture repair - Hip Fracture Intervention Study for Prevention of Hypotension (HIP-HOP) trial: study protocol for a randomised controlled trial. Trials. 2017; 18(1): 350. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCanty DJ, Heiberg J, Yang Y, et al.: Pilot multi-centre randomised trial of the impact of pre-operative focused cardiac ultrasound on mortality and morbidity in patients having surgery for femoral neck fractures (ECHONOF-2 pilot). Anaesthesia. 
2018; 73(4): 428–437. PubMed Abstract | Publisher Full Text\n\nGurunathan U, Stonell C, Fulbrook P: Perioperative hypothermia during hip fracture surgery: An observational study. J Eval Clin Pract. 2017; 23(4): 762–766. PubMed Abstract | Publisher Full Text\n\nMorri M, Forni C, Marchioni M, et al.: Which factors are independent predictors of early recovery of mobility in the older adults' population after hip fracture? A cohort prognostic study. Arch Orthop Trauma Surg. 2018; 138(1): 35–41. PubMed Abstract | Publisher Full Text\n\nFlikweert ER, Wendt KW, Diercks RL, et al.: Complications after hip fracture surgery: are they preventable? Eur J Trauma Emerg Surg. 2017; 1–8. PubMed Abstract | Publisher Full Text\n\nFolbert EC, Hegeman JH, Gierveld R, et al.: Complications during hospitalization and risk factors in elderly patients with hip fracture following integrated orthogeriatric treatment. Arch Orthop Trauma Surg. 2017; 137(4): 507–515. PubMed Abstract | Publisher Full Text\n\nZhang JC, Matelski J, Gandhi R, et al.: Can Patient Selection Explain the Obesity Paradox in Orthopaedic Hip Surgery? An Analysis of the ACS-NSQIP Registry. Clin Orthop Relat Res. 2018; 476(5): 964–973. PubMed Abstract | Publisher Full Text\n\nWhite SM, Altermatt F, Barry J, et al.: International Fragility Fracture Network Delphi consensus statement on the principles of anaesthesia for patients with hip fracture. Anaesthesia. 2018; 73(7): 863–874. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "36430",
"date": "26 Jul 2018",
"name": "Bengt Nellgård",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFirst of all it is interesting that the authors use the newly started SPOR registry. 1. The paper needs language improval. I have at least 50 changes in abstract, introduction methods and results sections. Write in past sense etc Use neutral Words when describing results 2. Patients less than 65 years are normally not included in studies on hip fracture as they are normally a different entity. Trauma and pathological fractures!!! Are they included? More statistics; GA group is; 2190; C is 202 and; NA is 11247. Can they really get results when comparing the Groups ? 3. Have they excluded pathological fracture? Reoperations? 4.In results and figures p values are not clear!!! Does pertrochanteric fractures have higher motrality rate? 5 Do they have any results on cemented prothesis in cervical fractures? Mortality rate. 6 Time to surgery; Cut off at 24 h. What do they know about delay 24-36h which is considered ok in f.ex. UK? 7 ASA is a crude preoperastive scale, not capturing low hemoglobin, dementia, malignancy and living conditions. Nottingham hip fracture score captures these please comment 8. Discussion; are there any previous reports from Scandinavia or Sweden addressing the topic? The routine in Sweden seems to be neuraxial anesthesia. This is not the case f.ex. in the USA. Discuss differences?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3865",
"date": "01 Aug 2018",
"name": "Jan Jakobsson",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Responses from the authors;Dear Referee,Thank you for valuable comments.First of all, it is interesting that the authors use the newly started SPOR registry.1. The paper needs language improval. I have at least 50 changes in abstract, introduction methods and results sections. Write in past sense etc Use neutral Words when describing resultsResponse; We will revise language and use past sense and neutral wordings of results.2. Patients less than 65 years are normally not included in studies on hip fracture as they are normally a different entity. Trauma and pathological fractures!!! Are they included? More statistics; GA group is; 2190; C is 202 and; NA is 11247. Can they really get results when comparing the Groups ?Response; The focus of our study was to assess the impact of anaesthetic technique on mortality among elderly, patient 65 and older. Different age limits have been used. We limited our analysis to the 65 and 65+ age as pathophysiology reasonably is different; both fracture type/cause and patients’ general health/fragility. The traditional Chi-2-tests should compensate for different in group size and we do believe that bust unadjusted and adjusted results are statistically sound.3. Have they excluded pathological fracture? Reoperations?Response; No: All hip fracture undergoing surgery with general, or neuraxial anaesthesia aged 65 and 65+ are included regardless of cause; we have not subgroup patients on trauma energy or bone density or similar.4.In results and figures p values are not clear!!! Does pertrochanteric fractures have higher mortality rate?Response; The mortality did differ within each fracture cohort, it was highest among pertrochanteric (8.4%) and lowest among the collum fracture patients (7.1%) We are not able to comment on cause of death, or whether the pertrochanteric patients had more extensive surgery.5 Do they have any results on cemented prothesis in cervical fractures? 
Mortality rate.Response; No we have no data related to surgical technique e.g. use of cement.6 Time to surgery; Cut off at 24 h. What do they know about delay 24-36h which is considered ok in f.ex. UK?Response; As opposed to the findings in this study, some have found a significantly higher risk of 30-day mortality for surgery later than 24 hours (41, 42). Some studies even suggest an increased mortality when surgical delay is more than 12 hours (38, 39) while other suggest 48 hours (36, 43). In a study adjusting for potential confounders, no difference in mortality was found in patients receiving surgery within 3 days as compared to those above 3 days (44). Swedish guidelines advise surgery within 36 to 48 hours and suggest adequate care of the patient and competent staff as equally crucial, although surgery within 24 hours is recommended (5). The results of this study should be interpreted with caution considering only 4% of the patients waited more than 24 hours for surgery. There are differences in the studies potentially explaining difference in result, such as variation in characteristics and size of study population, country and time-period for collection of data and outcome definitions.7 ASA is a crude preoperastive scale, not capturing low hemoglobin, dementia, malignancy and living conditions. Nottingham hip fracture score captures these please comment,Response; Most valid comment, we did not use the Nottingham score, and there is without doubt several patient factors that may have contributed to outcome.8. Discussion; are there any previous reports from Scandinavia or Sweden addressing the topic? The routine in Sweden seems to be neuraxial anesthesia. This is not the case f.ex. in the USA. Discuss differences?Response; The aim of the study was to use the PSOR register to assess what anaesthetic techniques that are used and whether we from retrospective data could see any difference in 30-day mortality between anaesthetic techniques used. 
We are not aware of any previous Swedish study assessing anaesthetic techniques independent impact on 30-day mortality."
}
]
},
{
"id": "36429",
"date": "02 Aug 2018",
"name": "Colin F. Royse",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review examines a specific question of whether general or neuraxial anaesthesia for fractured hip surgery affects 30 day mortality. The paper is well written with appropriate methods and analysis.\nThe deficiency of the paper is that it only addresses 30-day mortality and not any metric of quality of survival and quality of recovery. However, these variables may not be available in the SPOR.\nThe data source is not open, but can be obtained with permission from the Swedish Perioperative Registry. Add to the narrative review\nThe study identifies that the majority of fractured hip surgery is performed under spinal anaesthesia in Sweden. Only around 20% of patients undergo general anaesthesia. This could introduce bias. However, the groups appear well matched and the sample size remains large enough for meaningful comparisons. Propensity matching would have increased the fidelity of comparisons but was not performed. The research adds to the current literature identifying that there is no difference in mortality according to the type of anaesthetic administered.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1009
|
https://f1000research.com/articles/7-1286/v1
|
14 Aug 18
|
{
"type": "Method Article",
"title": "Recursive module extraction using Louvain and PageRank",
"authors": [
"Dimitri Perrin",
"Guido Zuccon",
"Guido Zuccon"
],
"abstract": "Biological networks are highly modular and contain a large number of clusters, which are often associated with a specific biological function or disease. Identifying these clusters, or modules, is therefore valuable, but it is not trivial. In this article we propose a recursive method based on the Louvain algorithm for community detection and the PageRank algorithm for authoritativeness weighting in networks. PageRank is used to initialise the weights of nodes in the biological network; the Louvain algorithm with the Newman-Girvan criterion for modularity is then applied to the network to identify modules. Any identified module with more than k nodes is further processed by recursively applying PageRank and Louvain, until no module contains more than k nodes (where k is a parameter of the method, no greater than 100). This method is evaluated on a heterogeneous set of six biological networks from the Disease Module Identification DREAM Challenge. Empirical findings suggest that the method is effective in identifying a large number of significant modules, although with substantial variability across restarts of the method.",
"keywords": [
"Network biology",
"Module identification",
"Community detection",
"DREAM challenge"
],
"content": "Introduction\n\nBiological functions emerge from interactions at the molecular level. For instance our circadian clock relies on the interactions between a large number of genes and proteins1,2, and many cancer types are typically associated with specific genetic3 and epigenetic4 modifications. Unsurprisingly, biological networks such as protein-protein interaction (PPI) or regulatory networks therefore have a high degree of modularity (a measure of strength of the division of the network into subgroups, or clusters, called modules in our context) where the ‘modules’ often correspond to genes or proteins that are involved in the same biological functions. Diseases are also rarely associated with a single gene: disease genes have a high propensity to interact with each other, forming disease modules5. The identification of these disease modules is a valuable tool to identify disease pathways, but also to predict other disease genes.\n\nThis task is sometimes also known as community detection or graph clustering. This is a well established problem in network science. A large number of methods exist (see e.g., 6), but there was a lack of common evaluation on relevant biological networks.\n\nThe Disease Module Identification DREAM Challenge aimed to comprehensively assess module identification methods across six diverse, unpublished molecular networks7. Participating teams were tasked to predict disease-relevant modules both within individual networks (subchallenge 1) and across multiple, layered networks (subchallenge 2). The modules were defined as non-overlapping subsets containing 3 to 100 nodes. This is not a graph partition task, as not all nodes necessarily have to be assigned to a module.\n\nIn this article, we detail our solution for subchallenge 1. 
Next, we introduce the six networks and how we preprocessed them, then we describe our recursive algorithm, and discuss its performance across each network.\n\n\nMethods\n\nThe human molecular networks used in the challenge are described in the challenge overview paper7. For convenience, we summarise their main characteristics in Table 1. On top of capturing different types of biological information, they also vary in terms of size, link density and structural properties.\n\nFor the duration of the challenge, networks were only provided in anonymised form, without any gene names, details on the underlying data or how the networks were constructed. In the experiment in this article, we also considered networks in their anonymised form.\n\nWhile protein interaction and homology networks, for instance, are obviously very different in nature, we opted to develop a method that could be applied to any network, independently of its type (although some preprocessing, described next, may be required, along with network-specific parameter tuning). This was because of the constraints of the challenge, in terms of both time and limited number of submissions.\n\nTo have a method that works across network types, we decided to focus only on undirected networks. We also assumed that edge weights are in the range [0, 1]. Most networks in the challenge satisfy these requirements; pre-processing was applied to the remaining networks.\n\nNetwork 3 is a directed network and as such needed to be converted to an undirected representation. This was achieved by simply assigning to all undirected <u,v> edges the average of the weights of the directed (u,v) and (v,u) edges (see Figure 1).\n\nNetworks 3 and 6 required normalisation of their weights. This was achieved by dividing all the original weights in each network by the maximum weight in that network.\n\nThese standardised networks are used as an input to our method. 
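The two preprocessing steps above — averaging directed edge weights into a single undirected edge, and rescaling weights by the network maximum — can be sketched as follows. This is a minimal illustration with ad-hoc helper names, not the authors' released code; how an edge present in only one direction is handled is an assumption here.

```python
from collections import defaultdict

def to_undirected(directed_edges):
    """Average the (u, v) and (v, u) weights into one undirected edge,
    as described for network 3. An edge present in only one direction
    keeps its own weight (an assumption; the paper's Figure 1 defines
    the exact rule)."""
    weights = defaultdict(list)
    for u, v, w in directed_edges:
        weights[frozenset((u, v))].append(w)
    return {tuple(sorted(pair)): sum(ws) / len(ws)
            for pair, ws in weights.items()}

def normalise(edges):
    """Rescale all weights into [0, 1] by dividing by the maximum
    weight, as applied to networks 3 and 6."""
    w_max = max(edges.values())
    return {e: w / w_max for e, w in edges.items()}
```

For example, `normalise(to_undirected([(1, 2, 0.25), (2, 1, 0.75), (2, 3, 2.0)]))` yields `{(1, 2): 0.25, (2, 3): 1.0}`.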
In what follows, any mention of a network refers to its standardised version.\n\nThe core of our method is the greedy Louvain algorithm8. This algorithm is a well-established method for community detection in networks6, it is applicable to weighted networks, and it provides better modularity maxima than other available greedy techniques6. In addition, the algorithm is computationally efficient and even large networks can be analysed in reasonable runtime.\n\nThe algorithm starts by creating communities of size 1 where each node in the network forms a community. Then the algorithm proceeds by executing two steps. In the first step, the algorithm attempts to assign a node v to a community of a neighbor u, such that the modularity of the partition is increased. This process is repeated for as long as the modularity can be improved. This generates an initial partition of the network. In the second step of the algorithm, each community of the partition is treated as a supernode. Supernodes are connected if at least one edge exists between nodes of the communities they represent. Once this second step is concluded, the algorithm iterates and stops when the modularity cannot increase anymore.\n\nAs part of our methods, we rely on the implementation of Louvain (v0.2) by Blondel et al.8. The Louvain algorithm is not tied to a single modularity criterion: indeed, it can be instantiated using a number of modularity criteria. Their implementation supports ten modularity criteria; in all our submissions we used the default Newman-Girvan criterion9.\n\nBy default, in the Louvain algorithm, the initial partition assigns each node to a module that contains only the node itself. This creates a lot of variability in the results, which we reduced by modifying the algorithm. An idealised module is similar to a clique: it would contain nodes that are highly connected to other nodes, which are highly connected to similar nodes, etc. 
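As a concrete reference for the Newman-Girvan criterion used above: for a weighted undirected graph with total edge weight m, Q sums, over communities, the fraction of edge weight inside the community minus the squared fraction of total node strength assigned to it. The sketch below is our own illustration of that computation, not the Blondel et al. code.

```python
def newman_girvan_modularity(edges, community):
    """Newman-Girvan modularity Q of a partition.

    edges: {(u, v): weight} for an undirected graph, each edge listed once.
    community: {node: community label}.
    Q = sum over communities of (intra-weight / m) - (strength / 2m)^2.
    """
    two_m = 2.0 * sum(edges.values())
    strength = {}  # weighted degree of each node
    intra = 0.0    # total weight of intra-community edges
    for (u, v), w in edges.items():
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
        if community[u] == community[v]:
            intra += w
    totals = {}    # summed node strength per community
    for node, k in strength.items():
        totals[community[node]] = totals.get(community[node], 0.0) + k
    return 2.0 * intra / two_m - sum((t / two_m) ** 2
                                     for t in totals.values())
```

For two unit-weight triangles joined by a single edge, the natural two-community split gives Q = 6/7 − 1/2 ≈ 0.357.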
In other words, a node is important if it is linked to other nodes that are important. This closely matches the intuition of the PageRank algorithm developed to score web pages10. PageRank has been widely used in settings other than web search, including in bioinformatics11. Our solution is therefore to calculate the PageRank for each node of the network, and to create an initial partition where each node is allocated to the module corresponding to its highest-scored neighbor (or itself, if that neighbor is scored lower). This has the advantage of both reducing the variability and ‘seeding’ Louvain with a promising partition. Here, we used a modified PageRank score that takes into account the edge weights.\n\nGiven that the task was to find modules with 3 to 100 nodes, a simple approach could be to run Louvain, process layer 1 from the hierarchical output generated by the algorithm, and extract all modules with a suitable size. This is, of course, far from optimal: Louvain generates modules of any size, and there may be interesting modules ‘hiding’ in a module containing more than 100 nodes (which would not be a valid submission to the challenge).\n\nInitial tests on trimming or splitting large modules did not yield any useful results, so we implemented a recursive approach. For any network of size greater than k (for instance, k = 100), we run Louvain and process all modules. If a module contains between 3 and k nodes, it is saved. If it contains less than 3 nodes, it is discarded. If it contains more than k nodes, we extract the corresponding network and add it to a list of networks to which Louvain is recursively applied. The recursion terminates when this list is empty. PageRank-based initialisation is used for all recursion levels.\n\nThe overall algorithm is summarised in Figure 2.\n\nDuring the challenge, modules extracted from the anonymised networks were submitted to the online platform and evaluated by the organisers. 
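The PageRank seeding and the recursive size-threshold logic described above can be sketched as follows. This is a simplified illustration under our own naming; a plug-in `detect` callable stands in for the PageRank-seeded Louvain step, and the authors' released implementation remains the authoritative version.

```python
def pagerank(adj, damping=0.85, iters=100):
    """Weighted PageRank by power iteration.
    adj: {node: {neighbour: weight}}; for an undirected graph, store
    each edge in both directions. Assumes no isolated nodes."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    out_w = {v: sum(nbrs.values()) for v, nbrs in adj.items()}
    for _ in range(iters):
        rank = {v: (1 - damping) / n
                + damping * sum(rank[u] * adj[u][v] / out_w[u]
                                for u in adj if v in adj[u])
                for v in adj}
    return rank

def seed_partition(adj, rank):
    """Initial partition for Louvain: each node joins the module of its
    highest-scored neighbour, or stays alone if no neighbour outranks it."""
    part = {}
    for v, nbrs in adj.items():
        best = max(nbrs, key=rank.get, default=v)
        part[v] = best if nbrs and rank[best] > rank[v] else v
    return part

def recursive_modules(nodes, detect, k=100):
    """Keep detected modules with 3..k nodes, recurse into larger ones,
    discard smaller ones. `detect` maps a node set to a list of node
    sets (standing in for PageRank-seeded Louvain) and is assumed to
    always split its input, so the recursion terminates."""
    kept, pending = [], [set(nodes)]
    while pending:
        for module in detect(pending.pop()):
            if len(module) > k:
                pending.append(module)
            elif len(module) >= 3:
                kept.append(module)
    return kept
```

On a star graph, for instance, every leaf is seeded into the hub's module, since the hub has the highest PageRank.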
Modules were scored using the Pascal tool for pathway scoring12. For each submission, the organisers would then communicate the number of significant modules that were identified for each of the six networks, but without providing any information on which submitted modules were significant. In the challenge leaderboard, submissions were ranked by the total number of significant modules identified. In this article, we analyse additional runs of our algorithm, evaluated locally using the code and GWAS data released by the organisers. Running the evaluation locally allows us to know which modules are significant.\n\nThe two parameters of our algorithm are the network being processed, and the value of the threshold k for the recursion. One configuration is a pair of a network and a threshold. For each configuration, we performed 10 runs of our algorithm.\n\n\nResults\n\nOn the final challenge leaderboard, our solution ranked 12th overall with 44 significant modules identified across the six networks (when the winning team found 60). Relative to other teams, it performed best on network 2 (10 modules found, best score 13) and network 3 (7 modules found, best score 9).\n\nHere, we analyse the performance over 100 new runs (10 per threshold value) for each network. The results are shown in Figure 3.\n\nWhite and red dots represent the median and mean values for each configuration, respectively. The blue line indicates our performance in the challenge leaderboard for that network, and the red line that of the best submission for that network.\n\nLouvain is non-deterministic, and even after initialising it using PageRank, the results for any given configuration have high variability. It is also worth noting that, for five of the six networks, there is at least one configuration for which our algorithm matches or outperforms the best system submitted to the challenge. Only network 6 leads to poor results. 
If we combine the best result for each network, we obtain a theoretical total of 81 significant modules, close to double our final score and 35% better than the best-performing solution in the challenge.\n\nFor most networks the performance is robust to changes of k, but there still appears to be an optimal configuration for each network. For networks 1, 3 and 4, our method produces better results with large values of k. For network 5, aiming for smaller modules produced better results, while for network 2 mid-range values of k are preferable.\n\n\nDiscussion\n\nThe results from these 600 additional runs show the potential of our approach. Under the same conditions as the challenge, our algorithm can match or improve the best results from the competition phase.\n\nEvaluating all the modules from a given solution against all the GWAS data using Pascal takes hours, and it is therefore not practical to use this evaluation to guide the creation of the modules. Even outside the challenge, it is more realistic for the extraction method to be purely driven by the network itself.\n\nHowever, now that the challenge is completed, it is possible to evaluate thousands of modules. Using this data, future work will focus on developing a module ‘score’ that would be a good predictor of whether that module is significant. If this can be achieved, we would then add a local optimisation step at the end of our algorithm, to fine tune the extracted modules.\n\nAnother direction for future work is to study the consensus between restarts. How many times do we identify the same modules, or does this correlate with their significance? 
We believe there is potential for voting/fusion approaches to extend our algorithm.\n\n\nConclusions\n\nNetwork-based approaches are an important tool in biomedical research, as they can lead to the identification of clusters of genes (modules) involved in the same molecular function or the same disease.\n\nIdentifying these modules is not trivial, and the Disease Module Identification DREAM Challenge was an important initiative to benchmark various approaches. We developed a recursive method based on the Louvain and PageRank algorithms, which performed reasonably well in the challenge.\n\nHere, we showed that this method can actually match or exceed the best results from the competition phase of the challenge. Further work will focus on exploiting the high variability between restarts, and on developing a module score that can guide optimisation of the identified modules.\n\n\nData availability\n\nThe dataset associated with the Disease Module Identification DREAM Challenge is available for registered participants at http://www.synapse.org/#!Synapse:syn6156761/wiki/400659.\n\nChallenge results and scoring scripts are available at http://www.synapse.org/#!Synapse:syn6156761/wiki/400647.\n\n\nSoftware availability\n\nSource code implementation for the recursive method presented in this article and used in the Disease Module Identification DREAM Challenge is available from GitHub: https://github.com/bmds-lab/DMI/tree/v0.1\n\nArchived source code at time of publication https://doi.org/10.5281/zenodo.1330835 13.\n\nSource code is available under a GPL 3.0 license.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors acknowledge the Disease Module Identification DREAM Challenge and Sage Bionetworks-DREAM for the provision of the six networks and of the evaluation method (code and GWAS studies).\n\nThe authors also wish to acknowledge the work from Charlie Shaw-Feather, who helped configure and run local Pascal evaluations. These evaluations also relied on computational resources and services provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia.\n\n\nReferences\n\nUkai-Tadenuma M, Yamada RG, Xu H, et al.: Delay in feedback repression by cryptochrome 1 is required for circadian clock function. Cell. 2011; 144(2): 268–281. PubMed Abstract | Publisher Full Text\n\nJolley CC, Ukai-Tadenuma M, Perrin D, et al.: A mammalian circadian clock model incorporating daytime expression elements. Biophys J. 2014; 107(6): 1462–1473. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLean MH, El-Omar EM: Genetics of gastric cancer. Nat Rev Gastroenterol Hepatol. 2014; 11(11): 664–674. PubMed Abstract | Publisher Full Text\n\nPerrin D, Ruskin HJ, Niwa T: Cell type-dependent, infection-induced, aberrant DNA methylation in gastric cancer. J Theor Biol. 2010; 264(2): 570–577. PubMed Abstract | Publisher Full Text\n\nBarabási AL, Gulbahce N, Loscalzo J: Network medicine: a network-based approach to human disease. Nat Rev Genet. 2011; 12(1): 56–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFortunato S: Community detection in graphs. Phys Rep. 2010; 486(3–5): 75–174. Publisher Full Text\n\nChoobdar S, Ahsen ME, Crawford J, et al.: Open community challenge reveals molecular network modules with key roles in diseases. bioRxiv. 2018. 
Publisher Full Text\n\nBlondel VD, Guillaume JL, Lambiotte R, et al.: Fast unfolding of communities in large networks. J Stat Mech. 2008; 2008(10): P10008. Publisher Full Text\n\nNewman ME, Girvan M: Finding and evaluating community structure in networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2004; 69(2 Pt 2): 026113. PubMed Abstract | Publisher Full Text\n\nPage L, Brin S, Motwani R, et al.: The PageRank citation ranking: Bringing order to the web. Technical report. Stanford InfoLab, 1999. Reference Source\n\nIván G, Grolmusz V: When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks. Bioinformatics. 2011; 27(3): 405–407. PubMed Abstract | Publisher Full Text\n\nLamparter D, Marbach D, Rueedi R, et al.: Fast and Rigorous Computation of Gene and Pathway Scores from SNP-Based Summary Statistics. PLoS Comput Biol. 2016; 12(1): e1004714. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerrin D: bmds-lab/DMI: Initial release (Version v0.1). Zenodo. 2018. https://doi.org/10.5281/zenodo.1330835"
}
|
[
{
"id": "37208",
"date": "10 Sep 2018",
"name": "Raghvendra Mall",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article proposes usage of a recursive form of Louvain method while including the PageRank of nodes to make graph partitions or detect biological 'modules' which are then evaluated through the DREAM Challenge evaluation tool PASCAL to determine how many of the identified modules were significant w.r.t. GWAS experiments conducted by the DREAM Challenge organisers.\nThe paper has been well written and all the technical details have been elaborated quite well by the authors, thereby suggesting that the method is reproducible and can be extended as per the suggestion of the authors to have a consensus disease module identification technique.\n\nThe authors provide a good introduction to Louvain method explaining its non-deterministic nature and limitations such as resolution limit for which it needs to be used in a recursive fashion to detection modules of length k ([3,100]). Moreover, they explain well how PageRank is used along with the Louvain method.\nThe only issue that I have is with the experiment section where the authors perform an additional 100 new runs and claim that they can obtain theoretically 81 significant modules. This is not correct way of evaluation as the authors are using the test set and tuning their hyper-parameters on the test set. In order to have a generic model, the authors can tune their model parameters on the training set and use the same for each test set network rather than tuning the results on test set. 
The authors do indicate this when they say that in future work they will focus on developing a module 'score' to predict if a module is significant or not.\n\nA major issue here is the non-deterministic nature of Louvain method which will result in different partitions every time the code is run. Hence the idea of having a 'consensus between restarts' is also interesting.\n\nFinally, it would have been better if the authors add information about the biological content of the modules that they have discovered and for which GWAS traits were the modules enriched in a given population. That analysis would complete the paper from a biological standpoint also.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "37207",
"date": "14 Sep 2018",
"name": "Alina Sîrbu",
"expertise": [
"Reviewer Expertise Complex systems modelling"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper presents a recursive algorithm to find modules in biological networks. The authors evaluate the modules using a DREAM dataset. The algorithm is promising and the paper is well written. The results are comparable and sometimes better than the best performance of the DREAM Challenge. I think some changes could improve the paper:\n\n1. In the definition of the algorithm, the intuition between the concept of modularity could be included, and maybe a high level definition of the chosen modularity index.\n2. “By default, in the Louvain algorithm, the initial partition assigns each node to a module that contains only the node itself. This creates a lot of variability in the results” Do you mean variability from one run to another?\n3. “For each configuration, we performed 10 runs of our algorithm.” Here you are evaluating each run and showing all 10 results. I wonder what would happen if one combined modules from different runs. To create an ‘ensemble’ module extractor. One could just pool together the modules found. This could be done over the 10 runs with the same K, but also maybe over runs with different k? Or even changing the modularity criterion and combining the results. And then selecting the most frequently found modules...\n4. For figure 3: is there information on how many significant modules are actually known in the networks? One could think of adding another line that would show the number of significant modules known as an upper bound for the performance.\n5. 
“If we combine the best result for each network, we obtain a theoretical total of 81 significant modules, close to double our final score and 35% better than the best-performing solution in the challenge”. How is this combination made?\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1286
|
https://f1000research.com/articles/7-677/v1
|
30 May 18
|
{
"type": "Research Article",
"title": "An opportunity for clinical pharmacology trained physicians to improve patient drug safety: A retrospective analysis of adverse drug reactions in teenagers",
"authors": [
"Andy R. Eugene",
"Beata Eugene",
"Beata Eugene"
],
"abstract": "Background: Adverse drug reactions (ADRs) are a major cause of hospital admissions, prolonged hospital stays, morbidity, and drug-related mortality. In this study, we sought to identify the most frequently reported medications and associated side effects in adolescent-aged patients in an effort to prioritize clinical pharmacology consultation efforts for hospitals seeking to improve patient safety.\n\nMethods: Quarterly reported data were obtained from the United States Food and Drug Administration Adverse Events Reporting System (FAERS) from the third quarter of 2014 and ending in the third quarter of 2017. We then used the GeneCards database to map the pharmacogenomic biomarkers associated with the most reported FAERS drugs. Data homogenization and statistics analysis were all conducted in R for statistical programming. Results: We identified risperidone (10.64%) as the compound with the most reported ADRs from all reported cases. Males represented 90.1% of reported risperidone cases with gynecomastia being the most reported ADR. Ibuprofen OR=188 (95% CI, 105.0000 – 335.000) and quetiapine fumarate OR=116 (95% CI, 48.4000 – 278.000) were associated with the highest odds of completed suicide in teenagers. Ondansetron hydrochloride OR=7.12 (95% CI, 1.59 – 31.9) resulted in the highest odds of pneumothorax. Lastly, olanzapine (8.96%) represented the compound with the most reported drug-drug interactions cases, while valproic acid OR=221 (95% CI, 93.9000 – 522.000) was associated with the highest odds of drug-drug interactions. Conclusion: Despite any data limitations, physicians prescribing risperidone in males should be aware of the high rates of adverse drug events and an alternative psychotropic should be considered in male patients. 
Further, patients with a history of pneumothorax or genetically predisposed to pneumothorax should be considered for an alternative antiemetic to ondansetron hydrochloride, due to increased odds associated with the drug and adverse event.",
"keywords": [
"adverse drug reactions",
"pharmacogenomics",
"psychiatry",
"precision medicine",
"pharmacogenomics",
"consult",
"mental health",
"teenagers"
],
"content": "Introduction\n\nWhen considering the aims of precision medicine, which has the underlying theme of maximizing therapeutic efficacy while minimizing adverse drug reactions, all physician and surgeon specialists provide an integral part in achieving the overall goal of this national endeavor (Manolio, 2016; Rasmussen-Torvik et al., 2014; Weinshilboum & Wang, 2017). Within medical specialties, clinical pharmacologists are vital for providing pharmacogenomics consultations to patients, other specialists, and in academic medicine to support the widespread implementation of pharmacogenomics and personalized medicine (Borobia et al., 2018; Moore, 2001; van der Wouden et al., 2017). Further, there is a growing need to provide more genomic medicine training modules to physicians in non-academic medical centers and rural clinics to support patient care decisions that address pharmacogenomics (McCauley et al., 2017). Within the United States, the American Board of Clinical Pharmacology (ABCP) accredits institutions that train clinical pharmacologists who consult on patient cases of drug-gene interactions (i.e. pharmacogenomics), drug-drug interactions (DDIs), drug-drug-gene interactions, toxicology cases, and the use of pharmacometric tools that provide Bayesian dosing support for therapeutic drug monitoring (TDM) (Aronson, 2012; Lewis & Nierenberg, 2007). However, when implementing hospital-based clinical pharmacology consultation units, aside from the established drug-gene guidelines, what is a reasonable approach for hospital pharmacologists to prioritize medications that are associated with the most reported adverse drug reactions that will improve hospital safety outcomes?\n\nIt is well-known that thousands of adverse drug reactions resulting in hospitalizations, increased lengths of hospital stay, and complications in patient management occur every year (Montané et al., 2018; Schmiedl et al., 2014). 
However, an approach that systematically addresses the top medications associated with the most reported adverse drug events, leading to a prioritization method for hospital pharmacologists to improve medication safety, is lacking (Davies et al., 2009; Shepherd et al., 2012). Several healthcare institutions in the United States have well established physician clinical pharmacology training programs, and integrate experiences into daily patient care, medical education, and research (Lewis & Nierenberg, 2007). The University of Chicago Hospital, in conjunction with the Indiana Institute for Personalized Medicine at Indiana University, offers a clinical pharmacology consultation service that provides pharmacogenomic consults to low-income patients, with thorough documentation of its process in a 2016 publication (Eadon et al., 2016). Other ABCP-accredited institutions (e.g. Mayo Clinic, Johns Hopkins Hospital, Baylor College of Medicine, Cincinnati Children’s Hospital, and more) are training and leading the U.S. with various pharmacogenomics implementation strategies into routine patient care (see ABCP training programs).\n\nIn European nations and other countries with national health systems, where there is an intrinsic goal of keeping all healthcare-related costs to a minimum and hospital re-admission rates low while still maintaining high-quality patient care, medical doctors specializing in clinical pharmacology who provide personalized medicine services are the norm (Borobia et al., 2018; Janković et al., 2016; Zagorodnikova Goryachkina et al., 2015). Contrastingly, the multi-payer model currently within U.S. hospitals often precludes hospitals from absorbing the cost of a clinical pharmacologist who would translate pharmacogenetics guidelines into daily patient care.\n\nIt is important to note that hospitals with clinical pharmacology training programs are often among the top ranked by U.S. 
News & World Report, even though clinical pharmacology is not one of the specialties being assessed for survival, patient safety, other care-related outcomes, and expert opinion (Harder et al., 2017). The service and commitment to the use of precision dosing in patient care, research, clinical pharmacology education, and pharmacogenomics implementation at these hospitals provide an overall compelling story. One of the most well-recognized hospitals, globally, is the Karolinska Institutet in Stockholm, Sweden, due to its awarding of the Nobel Prize in Physiology or Medicine. A recent article by the Karolinska Institutet discusses how the 50-year jubilee was recently celebrated in recognition of the establishment of their hospital’s Department of Clinical Pharmacology (Eichelbaum et al., 2018).\n\nIn the recent jubilee article, the Karolinska Institutet’s clinical pharmacologists detail the various established responsibilities of their clinical pharmacology services, which function as a division within the department of laboratory medicine today, and how they addressed this vital unmet clinical need within their medical center (Eichelbaum et al., 2018). In the U.S., a National Provider Identifier taxonomy code for clinical pharmacology is well established as 208U00000X; however, hospitals and state medical boards have not worked with state legislative officials to create a bill enacting medical licensure (e.g. independent, collaborative, or institutional) specifically for medical school graduates who enter directly into clinical pharmacology training. 
Yet, adverse drug reactions continue to affect outcomes and patient safety metrics each year (Burkhart et al., 2015; Montané et al., 2018).\n\nIt is important to realize that collaborative practice agreement laws between licensed physicians and pharmacists, physician assistants, and nurses are already in existence, but remain unaddressed for medical school graduates who choose only to specialize and train in clinical pharmacology. Therefore, if nothing is done, national implementation of precision medicine remains a challenge, due to not having enough trained medical doctors who focus on implementing pharmacogenomics into patient care and contribute to pharmacogenomics education (McCauley et al., 2017; Rosenman et al., 2017).\n\nWith this information as a background, the primary aim of this article is to determine the most frequently reported drugs and associated adverse drug reactions found within the FDA Adverse Events Reporting System (FAERS), to aid in prioritizing efforts for clinical pharmacology consultation services. To do so, we access publicly available FAERS data and report the reporting frequencies and reporting odds-ratios of cases in an adolescent patient age group to avoid polypharmacy, albeit not exclusively in all cases.\n\n\nMethods\n\nThe United States Food and Drug Administration’s (FDA) Adverse Events Reporting System (FAERS) quarterly reports were downloaded, with dates ranging from the third quarter of 2014 to the third quarter of 2017. The ‘primaryid’ column, which represents a unique combination of case sequence identifier and manufacturer version number, was systematically linked as the primary field to the other individual data files. Prior to our retrospective data analysis, we removed duplicate cases and selected reports classified in the adolescent age group alone. A source of bias in the FAERS quarterly files may be underreporting of drugs in particular people groups due to language. 
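The linking and de-duplication step just described can be illustrated as below. This is our own sketch with an invented row layout, not the authors' R code; the "T" age-group code for adolescents is an assumption to be checked against the FAERS data dictionary.

```python
def link_faers(demo_rows, drug_rows, age_group="T"):
    """Join DRUG records to de-duplicated DEMO records on 'primaryid'
    and keep a single age group. Rows are plain dicts here; the real
    FAERS quarterly files are delimited text with many more columns,
    and 'T' marking adolescents is an assumption."""
    demo = {}
    for row in demo_rows:
        pid = row["primaryid"]
        if pid not in demo and row.get("age_grp") == age_group:
            demo[pid] = row  # first occurrence wins; duplicates dropped
    return [{**demo[r["primaryid"]], **r}
            for r in drug_rows if r["primaryid"] in demo]
```

Each returned record combines the case's demographics with one drug row, so per-drug frequencies can be tallied directly from the result.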
Institutional Review Board approval was not required because the FAERS data are public, de-identified patient cases.\n\nThe following are the data tables for each quarter (i.e. Q1-Q4) of the year (i.e. yy in the file names): patient demographic and administrative information (DEMOyyQ1-Q4), drug/biologic information (DRUGyyQ1-Q4), the Medical Dictionary for Regulatory Activities (MedDRA) terms of reported adverse events (REACyyQ1-Q4), patient outcomes (OUTCyyQ1-Q4), report sources (RPSRyyQ1-Q4), drug therapy start dates and end dates (THERyyQ1-Q4), and finally the MedDRA terms coded for the clinical indications (INDIyyQ1-Q4). Links to the data used can be found in Table 1.\n\nThe primary and secondary molecular target mappings of the top FAERS-reported drugs were obtained for the compounds listed in the GeneCards database. We mapped the top ten genes using the GeneCards methodology, as has been previously reported (Stelzer et al., 2016; Weizmann Institute of Science, 2016).\n\nAll data homogenization and statistics were computed using the R language and environment for statistical computing (version 3.3.2; R Core Team, 2015). The top 15 indications, adverse drug reactions, and drugs are reported for the adolescent age group. The frequency tables were calculated as: (number of reports of a drug or adverse event) / (number of patient records) = drug or adverse event frequency. The reporting odds-ratios (ORs), which scan across the medications under test for a particular reported adverse drug event, were calculated using “Diarrhoea” as the control preferred term, while the “Hyperglycaemia”, “Pneumothorax”, and “Completed suicide” preferred terms were used as cases. The glm() function with the binomial statistical family in R was used to conduct the logistic regression analysis. Odds-ratios are reported as: odds-ratio, lower-95% confidence-interval (CI), upper-95% CI, and p-value. 
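The frequency calculation described above can be sketched as follows. This is an illustrative Python/pandas stand-in with invented toy counts; the study's computations were performed in R.

```python
import pandas as pd

# Toy reaction table: one row per (case, MedDRA preferred term) pair.
reac = pd.DataFrame({
    "primaryid": [1, 1, 2, 3, 4, 4],
    "pt": ["Diarrhoea", "Hyperglycaemia", "Diarrhoea",
           "Pneumothorax", "Diarrhoea", "Hyperglycaemia"],
})

# frequency = (number of reports of an adverse event) / (number of patient records)
n_records = reac["primaryid"].nunique()
freq = reac["pt"].value_counts() / n_records
print(freq)   # Diarrhoea 0.75, Hyperglycaemia 0.50, Pneumothorax 0.25
```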
A p-value of less than 0.05 was considered statistically significant.\n\n\nResults\n\nThe study included a total of 6,141 unique cases (male=2938, female=3021, undefined=184) for adolescent-aged patient records, out of a total of 22,784 unique pediatric cases. The compound with the most reported adverse drug reactions was risperidone (n=788), representing 10.64% of all reported cases. We found that 90.1% of the reported risperidone cases (male=710, female=77, undefined=1) were reported in males. The top 10 reported genes associated with risperidone are (GeneCards score): DRD2 (dopamine receptor D2; 25.88), PRL (prolactin; 25.59), HTR2A (5-hydroxytryptamine receptor 2A; 21.38), CYP2D6 (cytochrome P450 family 2 subfamily D member 6; 20.21), HTR2C (14.32), ABCB1 (ATP binding cassette subfamily B member 1; 13.63), BDNF (brain-derived neurotrophic factor; 11.93), DRD3 (11.79), HTR1A (11.35), and CYP3A5 (11.03). Figure 1 illustrates the reporting frequencies of the top 15 reported drugs in adolescents.\n\nThe most commonly reported clinical indication was prophylaxis (12.82%), followed by acute lymphocytic leukemia (6.55%) and product used for unknown indication (6.44%). Figure 2a illustrates the reporting frequencies of the top fifteen reported clinical indications in the adolescent patient records from our study. The most reported adverse drug reaction was diarrhea (n=110, male=55, female=53, undefined=2), which represented 4.62% of all reported cases. Following diarrhea, hyperglycemia (n=45, male=35, female=10) was the second most reported adverse drug event, representing 4.43% of all reported cases. 
Figure 2b illustrates the reporting frequencies of the top fifteen reported adverse drug reactions for all adolescent cases.\n\nFrequencies for the top 15 reported (a) clinical indications and (b) adverse drug reactions (ADRs) in adolescent patient records identified in the FDA Adverse Events Reporting System, ranging from the 3rd quarter of 2014 to the 3rd quarter of 2017.\n\nWe conducted logistic regression and report odds-ratios (ORs) obtained by setting the control variable to the most commonly reported adverse event, diarrhea (4.62%), and testing the second most common ADR, hyperglycemia (4.43%), and subsequently a more specific adverse drug reaction, pneumothorax (3.91%, n=12, male=6, female=6), across the top twenty reported FAERS drugs in our study. We found that risperidone, OR=214 (95% confidence interval [CI], 148 – 308, p=5.60e-183), had the highest odds of causing hyperglycemia, and that tacrolimus/tacrolimus anhydrous (n=18, male=11, female=7), OR=1.17 (95% CI, 1.13 – 1.32, p=0.00129), also increased the odds of hyperglycemia. In the same analysis, we identified that methotrexate (n=437, male=196, female=225, undefined=16), OR=0.67 (95% CI, 0.577 – 0.778, p=1.60e-07), increased the odds of diarrhea. Further, we found that ondansetron hydrochloride (n=75, male=22, female=53), OR=7.12 (95% CI, 1.59 – 31.9, p=0.0104), had the highest odds of causing pneumothorax among the top 20 most frequently reported drugs in our study.\n\nFigure 3a illustrates the top 10 ADR reporting frequencies of risperidone, and Figure 3b provides a graphical view of the top 10 clinical indications for prescribing risperidone in the teenage population, as found in our results. The three most frequent ADRs associated with risperidone were reported to be gynecomastia (21.31%), abnormal weight-gain (10.68%), and obesity (7.25%). 
Further, the top three most frequently reported indications associated with risperidone were reported to be bipolar disorder (14.42%), attention deficit/hyperactivity disorder (12.51%), and depression (7.79%) in teenagers.\n\nFrequencies for the top 10 reported (a) adverse drug reactions and (b) clinical indications for risperidone in adolescent patient records identified in the FDA Adverse Events Reporting System, ranging from the 3rd quarter of 2014 to the 3rd quarter of 2017.\n\nWe identified that the top three medications associated with drug-drug interactions (n=182, male=85, female=97) were olanzapine (8.96%), lorazepam (8.08%), and risperidone (5.36%). The odds-ratios for drugs reported to cause drug-drug interactions were found to be: valproic acid OR=221 (95% CI, 93.9 – 522, p=6.20e-35), diazepam OR=170 (95% CI, 62.6 – 463, p=7.82e-24), risperidone OR=71.0 (95% CI, 41.4 – 122, p=4.17e-54), diphenhydramine OR=46.1 (95% CI, 23.6 – 90.0, p=3.56e-29), lorazepam OR=6.08 (95% CI, 4.05 – 9.13, p=3.25e-18), and tacrolimus OR=4.28 (95% CI, 2.73 – 6.71, p=2.45e-10); while amlodipine besylate OR=0.213 (95% CI, 0.126 – 0.361, p=9.17e-09) was associated with diarrhea in this drug grouping. Figure 4a illustrates the frequencies for the top fifteen reported medications associated with drug-drug interactions.\n\nFrequencies for the top 15 reported (a) medications associated with drug-drug interactions (DDIs) and (b) completed suicide in adolescent patient records identified in the FDA Adverse Events Reporting System, ranging from the 3rd quarter of 2014 to the 3rd quarter of 2017.\n\nIn assessing the odds-ratios for completed suicide, with diarrhea as the control, among the top twenty drugs associated with completed suicide (n=34, male=8, female=23, undefined=3), we found that ibuprofen OR=188 (95% CI, 105 – 335, p=4.17e-70) had the highest odds in adolescent cases. 
Further, we also found, in order of decreasing odds, that quetiapine fumarate OR=116 (95% CI, 48.4 – 278, p=1.43e-26), diazepam OR=86.0 (95% CI, 32.8 – 225, p=1.15e-19), cetirizine hydrochloride OR=59.1 (95% CI, 27.9 – 126, p=2.33e-26), diphenhydramine OR=16.5 (95% CI, 8.68 – 31.3, p=1.12e-17), and risperidone OR=4.48 (95% CI, 2.27 – 8.82, p=1.49e-05) were also associated with increased odds of completed suicide within adolescent cases.\n\nIn contrast, hydroxyzine hydrochloride OR=0.0946 (95% CI, 0.0595 – 0.150, p=2.08e-23) and lorazepam OR=0.254 (95% CI, 0.141 – 0.458, p=5.15e-06) were found to be associated with increased odds of diarrhea among the top twenty compounds tested for completed suicide. Neither mirtazapine (p=0.980) nor herbals (p=0.990) were associated with increased odds of completed suicide, despite being listed second and fourth in associated frequency. Similarly, acetaminophen/butalbital (p=0.996), acetaminophen/hydrocodone (p=0.996), alcohol (p=0.996), atorvastatin calcium (p=0.996), carbamazepine (p=0.996), fluoxetine hydrochloride (p=0.993), mirtazapine (p=0.980), paroxetine hydrochloride (p=0.995), and quetiapine (p=0.976, in contrast to quetiapine fumarate, p=1.43e-26) did not increase the odds of completed suicide in our analysis. 
Figure 4b depicts the top fifteen drugs associated with completed suicide in adolescent patient records identified in the FAERS.\n\nThe top ten genes associated with ibuprofen, the compound with the highest odds for completed suicide in this study, were found to be PTGS2 (prostaglandin-endoperoxide synthase 2; 32.79), PTGS1 (22.74), ALB (albumin; 16.90), CYP2C9 (16.71), IL1B (interleukin 1 beta; 15.59), OXA1L (OXA1L mitochondrial inner membrane protein; 14.28), IL6 (13.12), IL10 (12.83), CYP2C8 (12.64), and IL1RN (11.82).\n\n\nDiscussion\n\nIn this study, we chose the adolescent data, over the adult and elderly age groups, in an effort to minimize polypharmacy and to address the scope of the primary aim of our study. We identified pharmacogenes associated with the drugs reported with adverse drug reactions, which can serve as a guide for clinical pharmacology services when prioritizing medications in both the inpatient and outpatient care settings. We found that risperidone, a second-generation antipsychotic with FDA approval for managing schizophrenia, bipolar I disorder (acute manic/mixed), autistic disorder-associated irritability, and Tourette’s syndrome in pediatrics, was the most reported drug in teenagers. We also found that two of the top three most frequently reported indications for risperidone, in adolescent cases, were indications that are not FDA-approved – attention deficit/hyperactivity disorder (12.51%) and depression (7.79%). This suggests the need for an increase in clinical pharmacology-trained physicians to help address the pressing clinical need for precision medicine in psychiatry, where diagnoses are stratified by biologically and physiologically relevant symptoms and subsequent treatments are then implemented based on drug pharmacokinetics (i.e. 
absorption, distribution, metabolism, and elimination) and pharmacodynamics (Boorstein & Historian, 2018).\n\nPrednisolone sodium succinate (3.81%), an anti-inflammatory glucocorticoid with various indications, and the anti-tumor necrosis factor-α (TNF-α) monoclonal antibody infliximab (3.35%) were second and third in reporting frequency for adolescent patients, respectively. More precise dosing of infliximab may be achieved by pharmacologists using pharmacometric methods that utilize measured plasma concentrations to recommend doses and dosing intervals that avoid sub-therapeutic concentrations.\n\nIn reference to our results suggesting increased odds of pneumothorax with ondansetron hydrochloride, patients who have a history of pneumothorax, or have conditions with a known increased prevalence of pneumothorax (e.g. Marfan’s syndrome, Ehlers-Danlos syndrome, rheumatoid arthritis, poly- and dermato-myositis, ankylosing spondylitis, systemic sclerosis), should be managed with an alternative antiemetic. Further, additional studies should be pursued investigating the mechanisms linking connective tissue diseases and gene expression modulation with ondansetron.\n\nNine of the top fifteen reported drugs associated with DDIs, shown in Figure 4a, are prescribed in patients treated for mental health disorders. It may be that patients are experiencing the compounded effects of multiple prescription medications competing for the same hepatic biotransformation pathways, coupled with a loss-of-function SNP affecting the primary drug-gene pathway, rather than the latter alone (Storelli et al., 2018). Therefore, these drug-drug-gene interactions resulting in phenoconversion, from a normal metabolizer to a poor or intermediate metabolizer, are an important consultation area for clinical pharmacologists. 
Similarly, this is another area where the use of TDM and Bayesian dosing support with pharmacometrics may be the most efficient method (Hiemke et al., 2011; Polasek et al., 2018).\n\nBoth the main limitation and a strength of the FDA Adverse Events Reporting System database is that the reports are voluntarily submitted by physicians, pharmacists, lawyers, patient consumers, and various healthcare professionals. The limitation is that complete medical histories are not factored into the analysis, and the results are indicative of a subset of all patient adverse drug event experiences. Despite these limitations, however, the FAERS database demonstrates the importance of publicly available pharmacovigilance data that allow open analysis and discovery for potential repurposing of existing drugs, and that provide a reporting mechanism for patients and caregivers to share their medication experiences (Burkhart et al., 2015; Oshima et al., 2018).\n\n\nConclusion\n\nIn addition to established pharmacogenomic guidelines, the FAERS database provides an important reference point for clinical pharmacologists to use when prioritizing medication safety consultations, pharmacogenomic education, and efforts to improve hospital outcomes.\n\n\nData availability\n\nData used in this study are available from the United States Food and Drug Administration (FDA) website, with specific links provided in Table 1.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAronson JK: What do clinical pharmacologists do? A questionnaire survey of senior UK clinical pharmacologists. Br J Clin Pharmacol. 2012; 73(2): 161–169.\n\nBoorstein DJ, Historian A: Precision Psychiatry—Will Genomic Medicine Lead the Way? JAMA Psychiatry. 2018.\n\nBorobia AM, Dapia I, Tong HY, et al.: Clinical Implementation of Pharmacogenetic Testing in a Hospital of the Spanish National Health System: Strategy and Experience Over 3 Years. Clin Transl Sci. 2018; 11(2): 189–199.\n\nBurkhart KK, Abernethy D, Jackson D: Data Mining FAERS to Analyze Molecular Targets of Drugs Highly Associated with Stevens-Johnson Syndrome. J Med Toxicol. 2015; 11(2): 265–273.\n\nDavies EC, Green CF, Taylor S, et al.: Adverse drug reactions in hospital in-patients: a prospective analysis of 3695 patient-episodes. PLoS One. 2009; 4(2): e4439.\n\nEadon MT, Desta Z, Levy KD, et al.: Implementation of a pharmacogenomics consult service to support the INGENIOUS trial. Clin Pharmacol Ther. 2016; 100(1): 63–66.\n\nEichelbaum M, Dahl ML, Sjöqvist F: Clinical pharmacology in Stockholm 50 years-report from the jubilee symposium. Eur J Clin Pharmacol. 2018; 74(6): 843–851.\n\nHarder B, Comarow A, Dougherty G: Methodology Updates for Best Hospitals 2017–18. U.S. News & World Report. 2017; (Accessed: 10 May 2018).\n\nHiemke C, Baumann P, Bergemann N, et al.: AGNP consensus guidelines for therapeutic drug monitoring in psychiatry: update 2011. Pharmacopsychiatry. 2011; 44(6): 195–235.\n\nJanković SM, Milovanović D, Zečević DR, et al.: Consulting clinical pharmacologist about treatment of inpatients in a tertiary hospital in Serbia. Eur J Clin Pharmacol. 2016; 72(12): 1541–1543.\n\nLewis LD, Nierenberg DW: American Board of Clinical Pharmacology fellowship training and certification in clinical pharmacology: educational value and future needs for the discipline. Clin Pharmacol Ther. 2007; 81(1): 134–137.\n\nManolio TA: Implementing genomics and pharmacogenomics in the clinic: The National Human Genome Research Institute’s genomic medicine portfolio. Atherosclerosis. 2016; 253: 225–236.\n\nMcCauley MP, Marcus RK, Strong KA, et al.: Genetics and Genomics in Clinical Practice: The Views of Wisconsin Physicians. WMJ. 2017; 116(2): 69–74. (Accessed: 10 May 2018).\n\nMontané E, Arellano AL, Sanz Y, et al.: Drug-related deaths in hospital inpatients: A retrospective cohort study. Br J Clin Pharmacol. 2018; 84(3): 542–552.\n\nMoore N: The role of the clinical pharmacologist in the management of adverse drug reactions. Drug Saf. 2001; 24(1): 1–7.\n\nOshima Y, Tanimoto T, Yuji K, et al.: EGFR-TKI-Associated Interstitial Pneumonitis in Nivolumab-Treated Patients With Non-Small Cell Lung Cancer. JAMA Oncol. 2018.\n\nPolasek TM, Tucker GT, Sorich MJ, et al.: Prediction of olanzapine exposure in individual patients using physiologically based pharmacokinetic modelling and simulation. Br J Clin Pharmacol. 2018; 84(3): 462–476.\n\nR Core Team: R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. 2015.\n\nRasmussen-Torvik LJ, Stallings SC, Gordon AS, et al.: Design and anticipated outcomes of the eMERGE-PGx project: a multicenter pilot for preemptive pharmacogenomics in electronic health record systems. Clin Pharmacol Ther. 2014; 96(4): 482–489.\n\nRosenman MB, Decker B, Levy KD, et al.: Lessons Learned When Introducing Pharmacogenomic Panel Testing into Clinical Practice. Value Health. 2017; 20(1): 54–59.\n\nSchmiedl S, Rottenkolber M, Hasford J, et al.: Self-medication with over-the-counter and prescribed drugs causing adverse-drug-reaction-related hospital admissions: Results of a prospective, long-term multi-centre study. Drug Saf. 2014; 37(4): 225–235.\n\nShepherd G, Mohorn P, Yacoub K, et al.: Adverse drug reaction deaths reported in United States vital statistics, 1999-2006. Ann Pharmacother. 2012; 46(2): 169–175.\n\nStelzer G, Rosen N, Plaschkes I, et al.: The GeneCards Suite: From Gene Data Mining to Disease Genome Sequence Analyses. Curr Protoc Bioinformatics. 2016; 54(1): 1.30.1–1.30.33.\n\nStorelli F, Samer C, Reny JL, et al.: Complex Drug-Drug-Gene-Disease Interactions Involving Cytochromes P450: Systematic Review of Published Case Reports and Clinical Perspectives. Clin Pharmacokinet. 2018; 1–27.\n\nvan der Wouden CH, Cambon-Thomsen A, Cecchin E, et al.: Corrigendum to: Implementing Pharmacogenomics in Europe: Design and Implementation Strategy of the Ubiquitous Pharmacogenomics Consortium. Clin Pharmacol Ther. 2017; 102(1): 341–358.\n\nWeinshilboum RM, Wang L: Pharmacogenomics: Precision Medicine and Drug Response. Mayo Clin Proc. 2017; 92(11): 1711–1722.\n\nWeizmann Institute of Science: GeneCards. v4.1 Build 30. 2016.\n\nZagorodnikova Goryachkina K, Burbello A, Sychev D, et al.: Clinical pharmacology in Russia-historical development and current state. Eur J Clin Pharmacol. 2015; 71(2): 159–163."
}
|
[
{
"id": "35025",
"date": "26 Jun 2018",
"name": "Antonio J. Carcas",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe aim of this study is to know the most frequently reported drugs associated with adverse drug reactions (and DDIs) in order to prioritize the efforts for clinical pharmacology consultation services. I find this manuscript interesting.\nComments:\nJust to clarify, please confirm that adolescent age is considered from 12 to 17 yo.\nI agree about the utility of knowing the most frequently reported drugs associated with adverse drug reactions; the author should mention some other similar studies previously published [1-4]. I also like the concept that a better knowledge of the most frequent drugs producing AEs can drive prioritization of efforts for clinical pharmacology consultation services. However:\n- Authors should give a more detailed description of the design (case/non-case?) and statistical methods allowing calculation of the OR. - We should not disregard the potential of this analysis to raise hypotheses about the link between AEs, DDIs, and drug PK and pharmacogenetics. A comment by the authors would be useful. - I also think that it would be useful for readers to provide a more specific comment about the relationship between pharmacogenomics and AEs whose knowledge could improve drug safety. For example, CYP2D6 polymorphisms have been related to weight gain and hyperprolactinemia in patients taking risperidone (including adolescents). 
On the other hand, the association of ondansetron with pneumothorax could reflect confounding by indication rather than a true causal association.\nAlthough not new, the findings that frequently reported indications for risperidone were not FDA-approved and that the top fifteen reported drugs associated with DDIs are prescribed in patients treated for mental health disorders are also interesting.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3887",
"date": "14 Aug 2018",
"name": "Andy Eugene",
"role": "Author Response",
"response": "Reviewer: Just to clarify, please confirm that adolescent age is considered from 12 to 17 yo. Response: Yes, in the methods section, I added a comment to specifically reflect the numerical age group for adolescents (12–17 years old). Reviewer: I agree about the utility of knowing the most frequently reported drugs associated with adverse drug reactions; the author should mention some other similar studies previously published [1-4]. I also like the concept that a better knowledge of the most frequent drugs producing AEs can drive prioritization of efforts for clinical pharmacology consultation services. However: Response: We thank you for recommending the references from similar previously published efforts identifying adverse drug reactions in pediatric-aged patients. We therefore added three of the four references you recommended in this updated version of our research article. Lee WJ, Lee TA, Pickard AS, Caskey RN, Schumock GT: Drugs associated with adverse events in children and adolescents. Pharmacotherapy. 2014; 34(9): 918-26. de Bie S, Ferrajolo C, Straus SM, Verhamme KM, Bonhoeffer J, Wong IC, Sturkenboom MC, GRiP network: Pediatric Drug Safety Surveillance in FDA-AERS: A Description of Adverse Events from GRiP Project. PLoS One. 2015; 10(6): e0130399. Cliff-Eribo KO, Sammons H, Choonara I: Systematic review of paediatric studies of adverse drug reactions from pharmacovigilance databases. Expert Opin Drug Saf. 2016; 15(10): 1321-8. Accordingly, we added the following statement to the article: “Further, we do acknowledge previously published efforts that provide insight to adverse drug reaction reporting in pediatric-aged patients (Lee et al. 2014; De Bie et al. 2015; Cliff-Eribo, Sammons, and Choonara 2016).” Reviewer: Authors should give a more detailed description of the design (case/non-case?) and statistical methods allowing calculation of the OR. 
Response: We thank the reviewer for this comment and have added three sentences detailing the methodology used in R for calculation of the odds-ratios within the statistics sub-section of the methods of this research manuscript. The following expression details the equation for the reporting odds-ratio: OR = (drug-of-interest reports with the case adverse event / drug-of-interest reports with the control adverse event) / (all-other-drug reports with the case adverse event / all-other-drug reports with the control adverse event). The control adverse event is set to “Diarrhoea” because it is the most reported adverse drug event in FDA reports for the adolescent age population of patients. Moreover, as mentioned, the case adverse events were set to “Hyperglycaemia”, “Pneumothorax”, and “Completed suicide.” Reviewer: We should not disregard the potential of this analysis to raise hypotheses about the link between the AEs, DDIs and drug PK and pharmacogenetics. A comment by the authors would be useful. Response: The association between prescription and non-prescription drug pharmacokinetics, drug metabolizing enzymes encoded by cytochrome P450 genes (e.g. CYP2B6, CYP2C19, CYP2D6, etc.) responsible for the metabolism of these medicines, and toxicity due to excessively high blood drug levels (i.e. plasma concentrations) resulting in adverse drug reactions is well established in the FDA’s Table of Pharmacogenomic Biomarkers in Drug Labeling (https://www.fda.gov/Drugs/ScienceResearch/ucm572698.htm) and the FDA’s Table of Substrates, Inhibitors and Inducers (https://www.fda.gov/drugs/developmentapprovalprocess/developmentresources/druginteractionslabeling/ucm093664.htm). 
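The reporting odds-ratio arithmetic described in this response reduces to a 2×2 cross-product ratio, sketched below with invented counts (Python for illustration; the original analysis used R's glm(), which for a single binary exposure yields the same point estimate as the cross-product ratio). The Woolf standard-error formula used for the confidence interval is a standard approximation, not necessarily the exact interval R reports.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting OR for one drug versus all other drugs.

    a: drug-of-interest reports with the case event (e.g. Hyperglycaemia)
    b: drug-of-interest reports with the control event (Diarrhoea)
    c: all-other-drug reports with the case event
    d: all-other-drug reports with the control event
    """
    or_ = (a * d) / (b * c)                       # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # Woolf SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)      # lower 95% CI bound
    hi = math.exp(math.log(or_) + 1.96 * se)      # upper 95% CI bound
    return or_, lo, hi

# Invented counts for illustration only (not the paper's data):
or_, lo, hi = reporting_odds_ratio(a=40, b=10, c=50, d=500)
print(f"OR={or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")   # OR=40.0 (95% CI 18.9-84.8)
```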
Further, these known drug-gene interactions, drug-drug interactions, and extrapolated drug-drug-gene interactions form the basis of the need for pharmacogenomics education within the implementation and practice of genomic medicine by physicians, for consultations provided by clinical pharmacology-trained physicians, and for educating various healthcare practitioners to improve the safety of prescription medicines. The National Human Genome Research Institute, of the National Institutes of Health, recently established a Pharmacogenomics Work Group within the Inter-Society Coordinating Committee for Practitioner Education in Genomics (ISCC) to address pharmacogenomic education needs within clinical genomic medicine (https://www.genome.gov/27554614/intersociety-coordinating-committee-for-practitioner-education-in-genomics-iscc/). There is a need to increase the number of clinical pharmacology-trained physicians in the United States to support the efforts of pharmacogenomics, as is already implemented in many countries. Schools of Nursing, Physician Assistant studies, and Pharmacy have already begun integrating the most common drug-gene interactions; however, due to the need for comprehensive medical care, medical students who select training in clinical pharmacology are essential for providing consultations within healthcare systems and in stand-alone clinics. Reviewer: I also think that it would be useful for readers to provide a more specific comment about the relationship between pharmacogenomics and AEs whose knowledge could improve drug safety. For example, CYP2D6 polymorphisms have been related to weight gain and hyperprolactinemia in patients taking risperidone (including adolescents). On the other hand, the association of ondansetron with pneumothorax could be a confounding by indication and not a true causal association. 
Response: To confirm our findings of increased odds of pneumothorax with ondansetron, we used OpenVigil 2.1-MedDRA (version 2.1, https://www.is.informatik.uni-kiel.de/pvt/OpenVigilMedDRA17/search/), an online pharmacovigilance analysis tool developed by the Christian Albrecht University of Kiel, Germany. OpenVigil version 2.1 includes the FAERS data from the 4th quarter of 2003 to the first quarter of 2018. The ondansetron-pneumothorax association was re-examined using the OpenVigil analysis tool, which confirmed that, for the reported adolescent age-group cases, ondansetron increased the odds of pneumothorax: Relative Reporting Ratio (RRR) = 7.037 (95% CI, 2.600 – 19.0408), Proportional Reporting Ratio (PRR) = 7.291 (95% CI, 2.6924 – 19.745), and Reporting Odds Ratio (OR) = 7.346 (95% CI, 2.6899 – 20.0613). These results further confirm and strengthen the methodology used in this article. We went a step further and confirmed the increased odds of the ondansetron-pneumothorax association in all age groups using OpenVigil and found: RRR = 5.644 (95% CI, 4.5326 – 7.0289), PRR = 5.732 (95% CI, 4.6030 – 7.1386), and OR = 5.751 (95% CI, 4.6139 – 7.1677). To understand the pharmacokinetic-pharmacogenomic implications of prescribing risperidone in medical practice, a population pharmacokinetic study reported that, relative to normal/extensive CYP2D6 metabolizers, CYP2D6 (*10/*10) poor metabolizers experience a 64% slower oral clearance rate, a 72% slower absorption rate in the gastrointestinal tract, and a 53% slower clearance of risperidone from the central compartment to the 9-hydroxyrisperidone metabolite compartment (Yoo et al. 2012). These are striking findings, and these same CYP2D6 poor metabolizers experience a 3-fold increase in risperidone area under the concentration-time curve (i.e. AUC, or drug exposure) when compared to normal metabolizers. 
To put this into the perspective of clinically relevant drug-drug interactions in psychiatry, where patients experience unnecessary adverse drug reactions: if a physician prescribes risperidone with either fluoxetine, paroxetine, quinidine, terbinafine, or bupropion (all strong CYP2D6 inhibitors) in a patient who is a normal/extensive CYP2D6 metabolizer, that patient will experience a greater than or equal to 5-fold increase in total risperidone exposure alone, as can be further referenced from the FDA’s Table of Substrates, Inhibitors and Inducers. Moreover, if this patient has any loss-of-function CYP2D6 genotype, resulting in a decreased ability to clear risperidone, the 5-fold increase in total risperidone drug exposure is further increased and could potentially lead to the reported risperidone adverse drug reactions (e.g. gynecomastia, abnormal weight-gain, and obesity). Therefore, caution should be used when associating risperidone with any particular adverse drug reaction alone, and such associations should be properly assessed through review of the complete medical record by a physician with pharmacogenomics training or a clinical pharmacologist."
}
]
},
{
"id": "35300",
"date": "16 Jul 2018",
"name": "Daniel D Hawcutt",
"expertise": [
"Reviewer Expertise Pediatric Pharmacology and pediatric pharmacogenomics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is an interesting retrospective review of ADR reports submitted to the US FDA for adolescents.\nThe strengths are that this population is rarely considered independently, despite having its own health needs and (as noted by the authors) not having quite the severity of polypharmacy of older individuals. Also, the study considers pharmacogenomics related to the most commonly prescribed medicines, using the GeneCards website as the data source.\nThere is a lot of useful data contained in this publication; however, there are some aspects that I think could be improved by some clarifications:\n1) What does a GeneCards score mean for a gene? I know a link is given, but a sentence or two giving an overview would help the reader.\n2) I am uncomfortable about the odds ratios (ORs) for completed suicide that feature prominently in the results section of the main paper and the abstract (as they are very, very large ORs). Are these describing young people who used the drug as the means to commit suicide, or who committed suicide while incidentally using this medication (two very different populations)? 
While I appreciated the lack of medical history mentioned in the discussion, I do worry that presenting these findings when compared to diarrhoea may make the drugs look more dangerous than they are (and I have indicated that professional statistical advice would be useful here to clarify this point - is diarrhoea the right comparator, or should it be something else?).\n\n3) Phrases like \"need for precision medicine in psychiatry\" in the discussion sound as if they suggest the study has identified new (or overlooked) pharmacogenomic associations that a clinical pharmacologist could act on, but unless I have misunderstood, the study does not do this; it only highlights where areas of unmet pharmacogenomic need exist. The pharmacogenomic section could be removed from the paper and it would still be a good paper, but assuming it is kept, then I think it needs to be clearer exactly what information this brings to a clinician.\nOverall, I enjoyed this paper, and it adds to this field.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3888",
"date": "14 Aug 2018",
"name": "Andy Eugene",
"role": "Author Response",
"response": "Reviewer: 1) What does a GeneCards score mean for a gene? I know a link is given, but a sentence or two giving an overview would help the reader. Response: GeneCards uses an Inferred Functionality Score that provides an objective number indicating the knowledge level about the functionality of human genes relative to the drug queried in the database by comparing the drug with all possible genes. The final results are ranked according to a relevance score and reported in the results section of this article. In our analysis, we mapped the top ten genes using the GeneCards methodology, as has been previously reported (Stelzer et al., 2016; Weizmann Institute of Science, 2016). Reviewer: 2) I am uncomfortable about the odds ratios (ORs) for the completed suicide that feature prominently in the results section of the main paper and the abstract (as they are very very large ORs). Are these describing young people who used the drug as the means to commit suicide, or committed suicide while incidentally using this medication (two very different populations). While I appreciated the lack of medical history mentioned in the discussion, I do worry that presenting these findings when compared to diarrhoea may make the drugs look more dangerous than they are (and I have indicated that professional statistical advice would be useful here to clarify this point - is diarrhoea the right comparator, or should it be something else?). Response: We thank you for this concern. The current study design, with the selection of diarrhea as a control (comparator), was thoroughly researched by examining the frequencies of adverse drug reactions for medications in our exploratory analysis. 
Please see the discussion section of the updated version of the article for the corroboration of our results for risperidone using OpenVigil 2.1-MedDRA (version 2.1, https://www.is.informatik.uni-kiel.de/pvt/OpenVigilMedDRA17/search/), an online pharmacovigilance analysis tool developed by the Christian Albrecht University of Kiel, Germany. The reason for the larger odds-ratios is the selection of the top 20 drugs ranked and implicated for the adverse drug reaction, rather than the entire database. Therefore, it is important to note that these are reporting odds-ratios for the top 20 drugs list for the specific adverse drug reaction. Reviewer: 3) Phrases like \"need for precision medicine in psychiatry\" in the discussion sound as if they suggest the study has identified new (or overlooked) pharmacogenomic associations that a clinical pharmacologist could act on, but unless I have misunderstood, the study does not do this; it only highlights where areas of unmet pharmacogenomic need exist. The pharmacogenomic section could be removed from the paper and it would still be a good paper, but assuming it is kept, then I think it needs to be clearer exactly what information this brings to a clinician. Response: We thank the referee for the comment and rewrote the sentence to better reflect the findings and scope of the paper. Thus, these findings highlight the unmet clinical need to increase the number of clinical pharmacology-trained physicians to serve as pharmacogenomic consultants and provide comprehensive patient care advancing genomic medicine (Boorstein & Historian, 2018)."
}
]
}
] | 1
|
https://f1000research.com/articles/7-677
|
https://f1000research.com/articles/6-2145/v1
|
18 Dec 17
|
{
"type": "Research Article",
"title": "Positive bias for European men in peer reviewed applications for faculty position at Karolinska Institutet",
"authors": [
"Sarah Holst",
"Sara Hägg",
"Sarah Holst"
],
"abstract": "Background: Sweden is viewed as an egalitarian country, yet most professors are Swedish and only 25% are women. Research competence is evaluated using peer review, which is regarded as an objective measure in the meritocracy system. Here we update the investigation by Wold & Wennerås (1997) on women researchers’ success rate for obtaining a faculty position, by examining factors (gender, nationality, productivity, etc.) in applications for an Assistant Professorship in 2014 at Karolinska Institutet. Methods: Fifty-six applications, 26 Swedish and 21 women applicants, were scored both on merits and projects by six external reviewers. Additional variables, including grants and academic age, calculated as the number of years since PhD excluding parental or sick leave, were gathered. Productivity was assessed by calculating a composite bibliometric score based on six factors (citations, publications, first/last authorships, H-index, high impact publication). Results: Overall, academic age was negatively correlated with scores on merits, as assessed by peer review, although not reaching statistical significance. In men, associations between scores on merits and productivity (P-value=0.0004), as well as having received grants (P-value=0.009), were seen. No associations were found for women. Moreover, applicants with a background from the Middle East were disproportionately found in the lowest quartile (Fisher exact test P-value=0.007). Conclusions: In summary, the gender inequality shown in peer review processes in Sweden 20 years ago still exists. Furthermore, a bias for ethnicity was found. In order to keep the best scientific competence in academia, more efforts are needed to avoid selection bias in assessments to enable equal evaluations of all researchers.",
"keywords": [
"equality",
"diversity",
"life science",
"peer review",
"bibliometry",
"faculty positions",
"multivariable analysis",
"principal component analysis"
],
"content": "Introduction\n\nThe key to promoting innovative research is a career system based on scientific competence, often assessed by peer review based on feasibility, novelty and significance of a research project in combination with assessing the merits of the applicant, regardless of gender, sexuality, ethnicity, religion, disability or age. However, the peer review process has been shown to be subject to substantial bias1–3. Hence, the system of meritocracy reinforces rather than reduces inequality and contributes to the uneven distributions of gender and ethnicity in academia.\n\nIn 1997 Wennerås and Wold2 concluded that women were less likely than men to be recruited to faculty positions in Sweden. Twenty years later, despite high standards in equality and diversity4, only 25% of the professors at Swedish Universities are women, and 23% have an international background, in spite of more than 50% of the doctoral students being women or students with other nationalities5. The increase of women professors is slow, and the Swedish government has made a new proposition with the goal that 50% of newly recruited professors be women6. It is therefore of interest to see whether or not the same type of bias in peer review processes still exists in Sweden today.\n\nOver the last years, Karolinska Institutet (KI) has announced yearly around 10 junior faculty positions (equivalent to an Assistant Professorship) with salary for four years. At KI, there is not yet a full tenure track system; once the four-year faculty position ends, one has to apply for continued funding to stay in the academic career track. At each level, the competition gets harder and many researchers fall out of the system. At each level, disproportionately more women disappear, referred to as the leaky pipeline. This is illustrated by the number of assessed and granted faculty positions at KI from 2011–2014 (Supplementary Figure 1). 
In 2011 and 2012, the proportion of assessed applications was equal between men and women, but this was not reflected in the proportion of granted applications; men had a higher success rate. For 2014, only applications passing the first bibliometric criteria were assessed (see Methods for details); women dropped out at an early stage and did not make it into the figure for comparison.\n\nThus, the aim of this investigation was to assess how applications submitted for Assistant Professorship positions at KI in 2014 were evaluated by peer review processes. A specific focus was placed on diversity, where gender, ethnicity and academic age were among the variables studied. We further calculated a composite bibliometric score to analyze productivity among the applicants, and compared it to the scores received from the external reviewers. In addition, we attempted to investigate whether influence from senior researchers at KI, research field, international experience and family situation mattered.\n\n\nMethods\n\nThe selection of applications for our study was based on the 2014 application process to become an Assistant Professor at KI. Eligibility criteria included a maximum academic age of seven years (number of years since PhD, excluding parental leave, clinical work or sick leave) and not having a permanent position at KI (e.g., technical staff or lab managers, which are often used as temporary solutions when postdocs cannot prolong their positions any longer). There were 150 applications submitted and 56 passed the first cut-off criterion of having a total journal impact factor of all publications >75 and were consequently sent for external review. The review panel consisted of six professors from other universities in Sweden (three men and three women). They were instructed to read the applications and score them based on 1) merits (publications and training) and 2) project plan (aim, novelty, methodology and feasibility). 
The scale ranged from 0-7 (0, insufficient; 1, bad; 2, weak; 3, good; 4, very good; 5, very good to excellent; 6, excellent; 7, outstanding). The total score of an application was the sum of both parts from all reviewers (maximum possible score on merits/project was 7 points * 6 reviewers = 42 and total was 2 * 42 = 84 for both parts), which gave a rank of the applicant in comparison to the other applicants’ scores. The applications were not blinded in any way and there was no information on how to be aware of, and deal with, biases from gender, ethnicity, age, etc. in the instructions sent to the reviewers.\n\nThe 56 applications read by the reviewers were assessed and discussed by both authors (SH and SH) according to different variables (Table 1). Undergraduate education was grouped into the following categories: 1) medical, 2) engineering, 3) science, 4) other. Ethnicity was based on the reported “mother tongue”, and information on children was found in the CV or from time deducted from research due to parental leave. Funding was reported in the CV and the total amount was calculated and divided into own funding as principal investigator (PI) and as co-PI. If the amount received was missing, it was estimated based on type of funding (postdoc fellowship, small project grant, travel grant, etc.) in relation to what the other applicants reported. International experience was judged as having done education or research for at least six months at any University outside of Sweden. A high-rank University experience was judged as having done education or research at any of the 10 top-ranked Universities according to the QS World University Rankings®, 2014/15 (Supplementary Table 1). Moreover, the number of supervised doctoral students as main or co-supervisor was counted. To be able to assess the KI network of the applicant, the number of women/men KI-affiliated references/instructors/mentors mentioned in the application was counted. 
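The scoring arithmetic described above (six reviewers, two parts of 0-7 each, maximum 84, then ranking by total) can be sketched as follows. This is an illustrative sketch only; the applicant names and scores are invented, not taken from the study data:

```python
# Hypothetical sketch of the scoring scheme: each of 6 reviewers scores an
# application 0-7 on merits and 0-7 on project plan, so the maximum is
# 7 * 6 = 42 per part and 2 * 42 = 84 in total.

def total_score(merit_scores, project_scores):
    """Sum both parts across all six reviewers for one application."""
    assert len(merit_scores) == len(project_scores) == 6
    return sum(merit_scores) + sum(project_scores)

def rank_applicants(scores_by_applicant):
    """Rank applicants by descending total score (rank 1 = best)."""
    ordered = sorted(scores_by_applicant, key=scores_by_applicant.get, reverse=True)
    return {name: rank for rank, name in enumerate(ordered, start=1)}

# Invented example applicants:
apps = {
    "A": total_score([7, 6, 7, 6, 7, 6], [6, 6, 7, 5, 6, 7]),  # 39 + 37 = 76
    "B": total_score([4, 5, 4, 3, 5, 4], [4, 4, 3, 5, 4, 4]),  # 25 + 24 = 49
}
ranks = rank_applicants(apps)  # {"A": 1, "B": 2}
```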
The project plan submitted by the applicant was grouped into research field using the same division as done by the Swedish Research Council (Supplementary Table 2) and categorized into method used (Supplementary Table 3). Three applicants did not provide a project plan and were hence excluded from analysis wherever the project score was included.\n\nSD: standard deviation; PI: principal investigator; kSEK: thousands Swedish crowns.\n\nThe total number of publications and the number of first and last authorship positions were assessed from the publication list provided by the applicant. The number of high impact publications was defined as having a lead authorship position (first or last) in any of the 30 top-ranked journals according to the Journal Citation Reports® 2014 (Supplementary Table 4). Total number of citations was reported in the CV as well as the H-index (h), which is defined as h number of publications with h number of citations. A composite bibliometric score was subsequently calculated corresponding to Wennerås & Wold2 by summarizing standardized values of: 1) total number of citations, 2) total number of publications, 3) number of first authorship publications, 4) number of last authorship publications, 5) H-index, and 6) high impact publication (yes or no).\n\nThe effect of having a broad network at KI was assessed using bibliometric parameters as follows. The applicants were divided into four groups based on quartiles (Q1-4) of the scores received on merits by the external reviewers. All KI researchers connected to the respective applicant were consequently pooled in these four groups, and stratified by the source of connection to the applicant: 1) PhD-supervisor, 2) postdoc-supervisor, 3) collaborator, and 4) used as reference. 
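A minimal sketch of how a composite score of this kind can be computed: standardize each of the six factors across applicants (z-scores), then sum. The factor names, the example numbers, and the equal weighting are assumptions for illustration; the authors' exact procedure may differ:

```python
# Illustrative sketch (not the authors' code) of a composite bibliometric
# score: z-score each factor across applicants, then sum the six z-scores.
from statistics import mean, pstdev

FACTORS = ["citations", "publications", "first_auth", "last_auth",
           "h_index", "high_impact"]

def composite_scores(applicants):
    """applicants: list of dicts holding the six factor values per person."""
    z = {}
    for f in FACTORS:
        vals = [a[f] for a in applicants]
        mu, sd = mean(vals), pstdev(vals)
        # Guard against a zero-variance factor (all applicants identical).
        z[f] = [(v - mu) / sd if sd else 0.0 for v in vals]
    return [sum(z[f][i] for f in FACTORS) for i in range(len(applicants))]

# Two invented applicants; the second dominates on every factor.
people = [
    {"citations": 400, "publications": 20, "first_auth": 7,
     "last_auth": 1, "h_index": 10, "high_impact": 0},
    {"citations": 900, "publications": 30, "first_auth": 10,
     "last_auth": 3, "h_index": 15, "high_impact": 1},
]
scores = composite_scores(people)  # second applicant scores higher
```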
By advice and help of the University Library at KI, bibliometric parameters for each researcher were derived from verified publications (articles and reviews) available between 1995–2014 and presented as 1) Avg Pub = Average of the number of publications, 2) Cf = Average of the field normalized citation scores where high values indicate that several publications were highly cited compared to publications in the same research area, 3) Avg Perc Cf = Average of the field normalized citation percentile for the department of the researcher, 4) Share Top 5% = Proportion of the field normalized publications that belong to the 5% most highly cited in the world, 5) Cnormalized = Average of the normalized citation scores based on year and document type, but not field type, 6) Avg JIF = Average of the journal impact factors for the department of the researcher, and 7) Avg JCf = Average of the journal field normalized citation scores for the department of the researcher. The field normalized indicator is not calculated if the group had fewer than 50 publications during the analyzed period because of instability. The normalization procedure compensates for different citation patterns due to research area, publication year and article type. The bibliometric numbers for all described variables were collapsed into the four groups, as we were only allowed to present data at group level; hence, no statistical analyses were performed and only descriptive results were presented.\n\nAll continuous variables were tested for normality and skewness, and log-transformed if skewed. Linear regression analysis was carried out in SAS 9.4 with PROC REG for each continuous variable as exposure, stratified by sex, with scores received on merits as outcome. The significance of the model was reported as trend. For binary variables, Fisher’s exact test was carried out using PROC FREQ on the different quartiles based on scores received on merits. 
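The two univariate analyses described here (a linear regression of merit score on a continuous exposure, and Fisher's exact test on binary variables vs. quartile membership; SAS PROC REG and PROC FREQ in the paper) can be sketched with standard-library Python. The numbers below are invented for illustration, not the study data:

```python
# Stdlib-only sketch of the univariate analyses: a least-squares slope and a
# two-sided Fisher's exact test for a 2x2 table (e.g. group membership vs.
# top-quartile placement).
from math import comb

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]]."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    denom = comb(n, r1)
    def p_of(a_):  # hypergeometric probability of cell count a_
        return comb(c1, a_) * comb(n - c1, r1 - a_) / denom
    p_obs = p_of(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    # Sum probabilities of all tables at least as extreme as the observed one.
    return sum(p_of(k) for k in range(lo, hi + 1) if p_of(k) <= p_obs + 1e-12)

slope = ols_slope([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear: slope 2.0
p = fisher_exact_2x2(5, 0, 0, 5)                # maximally uneven 2x2 table
```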
Multivariable analysis was performed by step-wise regression using the PROC PHREG procedure to scrutinize variables of importance for the outcome (scores received on merits) for different groups of applicants (men, women, Europeans, non-Europeans). The principal component analysis (PCA) used for pattern recognition analysis was done in the Soft Independent Modeling of Class Analogy (SIMCA 13, Umetrics®, Umeå, Sweden). The PCA is designed to extract and display the systematic variation in data sets and pre-process variables by scaling and mean centering in order to standardize weighting of each parameter. The first component in the PCA represents the largest variation in the data set, the second component the largest of the remaining variance, etc. The PCA creates a score plot showing the cluster of individuals in groups, and a loading plot identifying variables important for creating these clusters. The location of the individual in the score plot corresponds to the variable distribution in the loading plot. The PCA plots were re-generated using the plotly function in R for interactive online figures.\n\n\nResults\n\nThe average applicant was a man from Sweden or another European country with some international experience, who co-supervised PhD-students and had received grants, both as PI and as co-PI. The average bibliometric variables were 20 published articles, seven as first and one as last author, with about 400 citations (Table 1). In contrast, a successful top-ranked candidate, found in the first quartile (Q1) of the scores received on merits by the external reviewers, had received more funding, did their postdoc at high ranked Universities, supervised one PhD-student and published 22 articles (Figure 1).\n\nThe average of an applicant in Q1, hence a successful applicant, is illustrated in the figure. In brief, this person would be a Swedish man with a science degree and a PhD in cell and molecular biology. 
The person would have spent a postdoc abroad at one of the top 10 universities in the world, and has an academic age (the time since PhD) of about four years. The person has been successful in securing grants as principal investigator (PI) of about 5 million Swedish crowns, and has published 22 research articles with eight as first author. Moreover, this person also has a good network of peers at KI, mostly men, has one PhD-student of his own and no children so far.\n\nTo explore the impact of different variables on the success rate, data were divided into quartiles based on the scores received on merits by the external reviewers (Table 2). Only two women were found in Q1, while the gender distributions in Q2-4 were almost equal. In men, univariate analysis revealed a positive association between scores received on merits and the composite bibliometric score (Trend test P-value=0.0004), while this was not true for women (P-value=0.84; Figure 2A). The association seen in men remained significant even after removing the top five applicants (data not visualised). Likewise, in Europeans, a positive association between scores received on merits and the composite bibliometric score was shown (P-value=0.0003), while not in non-Europeans (P-value=0.42; Figure 2B). The positive trend was also seen when comparing European men only (P-value=0.0004) to all other applicants (P-value=0.60; Figure 2C). Moreover, applicants with a background from the Middle East were disproportionately found in the lowest quartile based on scores received on merits (Fisher’s exact test P-value=0.007). The benefit of having obtained grants was important for men, with an association with scores received on merits, as PI (P-value=0.03) and as co-PI (P-value=0.009). This was not true for women, although they obtained the same amount of funding overall. 
International experience did not influence the score outcome unless it was spent at one of the top universities; borderline significance was found in the Fisher’s exact test for top university experience grouped by scores on merits (P-value=0.058). There were no significant effects of having children or of academic age on the score outcome, although both variables seemed to have an inverse correlation. Scores received on the project plan were significantly associated with scores received on merits, especially for women (P-value=0.0002), but also for men (P-value=0.045).\n\nSD: standard deviation; PI: principal investigator; kSEK: thousands Swedish crowns\n\nA productivity score (x-axis) was calculated for each applicant by equal weights of the following bibliometric parameters: 1) total number of citations, 2) total number of publications, 3) number of first author publications, 4) number of last author publications, 5) H-index, and 6) high impact publication with lead author position (yes or no). On the y-axis, the scores received by the external reviewers on the merits of the applicant were plotted. (A) For men, a clear association between productivity and merits was detected (P-value=0.0004). For women, on the contrary, there was no association found (P-value=0.84). (B) For applicants who came from Europe originally, an association between productivity and merits was detected (P-value=0.0003), while there was no association found for non-Europeans (P-value=0.42). (C) Finally, the combination of being male and from Europe was also found to have a strong association (P-value=0.0004), which was not seen in the other applicants (P-value=0.60).\n\nIn multivariate analysis, stepwise regression was carried out in men and women separately to explore important factors for explaining the outcome. 
In men, the factors contributing most to a high score on the application were 1) the composite bibliometric score, 2) score based on project plan, and 3) grants as PI (all P-value<0.001). In women, the only variable that had any impact on outcome was the score based on project plan (P-value=0.004).\n\nThe numbers of the KI-affiliated researchers for each quartile group of applicants were presented stratified on gender and the source of connection (Table 3). For PhD-supervisors, the numbers were fairly constant across all quartiles, although there were more men (n=6) than women (n=1) in Q1. There were only two postdoc-supervisors in Q1, possibly reflecting that most applicants in Q1 did not stay at KI during their postdoc training. The number of reference persons was also lower in Q1 overall, with higher numbers for men in Q2 and Q3, while Q4 was even for both genders. When looking at collaborators, there was an interesting gender difference observed. Men were about twice as likely as women to be collaborators in Q1-3, at almost constant levels. However, in Q4 the opposite was true: women were twice as likely as men to be collaborators. A general interpretation would be that applicants in Q4 were more likely to be connected to women researchers while the opposite was true for Q1.\n\ngroupsize = Number of researchers within the cohort\n\nP = Number of verified Articles & Reviews during the analyzed timespan.\n\nCf* = Average of the Field Normalized Citation Scores for verified Articles & Reviews. High values indicate that several publications are highly cited compared to publications in the same research area, however the distribution may be highly skewed.\n\nShare Top 5%* = The proportion of publications that belong to the 5% most highly cited publications in the world (field normalized). 
High values indicate that many of the publications are among the world’s most highly cited publications within that field.\n\nCnormalized = Average Normalized Citation Scores for verified Articles & Reviews. Normalization is done for publication year and document type, but not field type. Can be used in conjunction with Cf to distinguish effects of normalization of research area.\n\n*=Field normalized indicator. Because of instability, it is not calculated if the cohort has fewer than 50 publications during the analyzed period, and it does not include publications published in the current year -1. The normalization procedure compensates for different citation patterns due to research area, publication year and article type.\n\nCertain data included herein were derived from the Web of Science® prepared by THOMSON REUTERS®, Inc. (Thomson®), Philadelphia, Pennsylvania, USA: © Copyright THOMSON REUTERS® 2015. All rights reserved.\n\nThe average number of publications per researcher was constant across quartiles with ~80 for the collaborators and ~100 for the other groups (Table 3), with two exceptions: the PhD-supervisors in Q4 and the postdoc-supervisors in Q1 had about twice as many publications. The two postdoc-supervisors in Q1 published more than average, indicating that applicants in Q1 who stayed at KI chose successful researchers as supervisors. The same was true for the top 5% publications, where the Q1 group was generally better, especially considering the two postdoc-supervisors. However, in the totals of the field- and document-normalized citation scores, Q2 outperformed the other groups, indicating that applicants in Q2 had a scientifically well performing network of researchers at KI, who were highly cited in their respective fields. 
The same pattern was seen in the normalized citation scores at departmental level, in which the Q2 group performed better than the Q1 group in two of three compared indicators (Supplementary Table 5).\n\nThe PCA was created to visualize the relationships between the variables and the scores received on merits, based on the characteristics of the applicants. The loading plot shows the distribution of the variables influencing the outcome of the merit scores (Figure 3A). The position of an applicant in a score plot corresponds to a high level in variables located in the same position in the loading plot, and a low level in variables located in the opposite position through the origin. The first principal component explained 23% of the variance, and the second 15%. The first score plot shows the location of the applicants stratified by gender (Figure 3B) and the second by ethnicity (Figure 3C) in relation to the quartiles based on the scores received on merits (A-D). A corresponds to Q1, B to Q2 and so on. The third score plot illustrates the research field in relation to the method used in the project (Figure 3D).\n\nThe PCA is based on the variables assessed in applications for an Assistant Professorship position at Karolinska Institutet in 2014. The PCAs were created to visualize the relationships between the variables and the scores received on merits by the external reviewers. The loading plot (A) shows the distribution of the variables; the closer together they are, the more related they are. The location of an applicant in a score plot (B–D) corresponds to a high level in variables located at the same location in the loading plot and a low level in variables located at the opposite location through the origin in the loading plot. The score plots show the location of the applicants in regard to gender (B) and ethnicity (C) in relation to the quartiles based on the score of the merits. 
The research field in relation to the method used in the project is seen in the last plot (D). Abbreviations: A=quartile 1, B=quartile 2, C=quartile 3, D=quartile 4. Int Exp=International Experience, High Imp=High Impact publications, Acad age=Academic age (years from PhD defense), High Rank=Post doc visit at a high ranked university (see Supplementary Table 1), First Auth=First Author publications, Last Auth=Last Author publications, Total Pub=Total number of publications, PI-Grant=Grants received as Principal Investigator, co-PI Gran=Grants received as Co-Principal Investigator, Main sup=Experience as main supervisor, Co-Sup=Experience as co-supervisor, Men KI-aff=Number of men KI-affiliated researchers associated with the applicants, Women KI-aff=Number of women KI-affiliated researchers associated with the applicants. The other abbreviations are found in Supplementary Table 2 and Supplementary Table 3. The online versions of Figures 3B–D are interactive. Clicking a data point will highlight individuals that share that variable both within and across score plots. For example, clicking a ‘woman’ data point highlights all women within the Gender score plot and all individuals in the Ethnicity and Research score plots who are women. Double click to reset the plot.\n\nIn the first score plot (Figure 3B), the applicants with the highest total points, Q1 (A), did not form a separate group but were mostly located in the upper right quadrant corresponding to high numbers in citations, h-index and first author publications. The applicants in Q2 (B) were located close to the origin in the upper left quadrant corresponding to high impact publications and postdoc visits at high ranked universities, Q3 (C) were spread all over the plot and Q4 (D) were mostly located in the lower left quadrant corresponding to having children.\n\nIn the second score plot (Figure 3C), Swedish applicants were not located in a specific square of the PCA. 
The same was almost true for European applicants, with the exception of only one European applicant in the lower right square, corresponding to experience as supervisor and receiver of previous grants. Noteworthy, the three applicants from the Middle East were found in the lower left quadrant, opposite to the quadrant where the highest ranked applicants were found.\n\nIn the last score plot, Figure 3D, projects in the research field of cell and molecular biology were found everywhere, although the majority of the applicants from Q1 (A) either had projects or methods in the research field of cell and molecular biology. The greatest heterogeneity of research fields was found in the left upper quadrant, corresponding to high impact publications and postdoc visits at high ranked universities in the loading plot.\n\n\nDiscussion\n\nIn this paper, we described the main characteristics of applicants for a junior faculty funded position at KI in 2014, and highlighted the desired variables for a successful candidate. We showed that men’s scores were positively associated with bibliometric measures and funding, which was not true for women. In addition, applicants with a Swedish or European background were more likely to receive higher scores.\n\nThe study is a thorough investigation of biases in peer review processes for junior faculty positions at KI. However, some limitations should be noted. The data were sub-selected from all the applications, because only one third of them were externally assessed. Therefore, the sample size is small and power is limited. For some variables, data were missing, and therefore imputation was done where possible.\n\nIn society today, awareness of how perception is shaped by social background, education, ethnicity, gender, religion, profession and country of residence is increasing. In academia, the consensus around the meritocracy system and the objectivity of peer review is being challenged and unconscious bias training has become popular7,8. 
Still, more work needs to be done; significant gender bias persists even though the National Institutes of Health (USA) changed its review process9. Already in 2008, the European Research Council (ERC) created a gender balance working group, but the systematically lower success rates for women remain10. Since 1997, when Wennerås and Wold published their article about gender bias, the research climate has changed2, but our study shows that gender bias in peer review processes in Sweden still exists, hindering women’s advancement up the academic career ladder. A data simulation of a corporate organization showed that minor disadvantages at the junior level were likely to become an insurmountable lead at the senior level11. Hence, if women were in the majority at a low level in an organization and were just slightly disadvantaged, they would represent only one third at the highest level. This is in line with the scenario of the leaky pipeline of women in academia. We suggest that much of the leak is attributable to gender discrimination in the peer review processes along the academic track. A side note is our observation of the skewed gender distribution among the KI-affiliated researchers associated with the applicants; in Q1-Q3 there were twice as many men, while the opposite was true in Q4. Notably, the observation is strikingly similar to the distribution of men and women professors (3:1).\n\nMoreover, in 2014, faculty funding for Swedish universities resulted in an uneven distribution in which women scientists received 80 million SEK less per year than men12. A research career system built on mobility and rapid and vast publishing tends to impair the outcome for women researchers1, since women traditionally are more involved in family life. However, this seems to be more true in the early stages of the academic career13, whereas women with children become more efficient and have been suggested to achieve better results than women without a family14. 
The PCA analysis demonstrated an inverse correlation between having children and scores received on merits, as a family often slows down production speed, resulting in fewer publications15, shorter postdoc visits abroad and a higher academic age, which in turn results in less funding and more time spent on securing alternative funding, such as commissioned research on short-term contracts. In the long run, production is slowed down further and the establishment of an independent research platform is delayed. The uncertainty, combined with the necessity for economic stability, either encourages these women to take on positions as lecturers or to leave academia - both resulting in the leaky pipeline and a reduced number of women professors.\n\nThe masculine stereotyping related to leadership positions is also detrimental16; the Swedish University of Agricultural Sciences concluded that qualified women did not think it was worth applying to a call for a professorship launched in a way that only attracted men17. Similarly, a recent study in Science showed that stereotyping at higher levels also extends to ethnic underrepresentation in academia18, in line with our observation that applicants from the Middle East ended up disproportionately in Q4.\n\nIn our bibliometric analysis of KI researchers connected to the applicants, the Q2 group had higher normalized citation scores, indicating well-cited publications within their fields. Interestingly, Q2 was the only group with a majority of women applicants. It could be speculated that women applicants might have received higher scores if the quality of their publications had been assessed in field context. In other words, to overcome gender bias in publication rates, a shift from quantity to quality is warranted. Ingegerd Palmér, former Vice-Chancellor of Mälardalen University in Sweden, also concluded as early as 2007 that women, despite fewer publications, were assessed equally to men in qualitative measures19. 
A similar conclusion was reached at the University of York, UK, a gold medalist in the Athena SWAN scheme, accredited for its work on gender equality20. Women often reach the final evaluation process but are deprioritized when the personal assessments of committee members are decisive. Researchers working in close collaboration with successful senior professors were referred to as “well-connected” if they were men, but “dependent” if they were women, by committees at the Swedish Research Council21. Hence, many women researchers get stuck in a vicious circle, facing a different trajectory in terms of advancing up the academic ladder than men at similar positions13. In addition, women professors are reported to collaborate less with women at junior faculty positions compared to what male professors and male junior faculty do22. However, women who do survive in academia eventually catch up with men in research output.\n\nFor future directions, direct feedback on present funding applications would improve future ones. We also suggest a transparent decision-making process with gender-neutral announcements of positions, and mentoring programs to develop networks for non-normative applicants, such as those of a non-European ethnicity. To compensate for a slow production rate, we suggest that additional merits beyond scientific competence be rewarded: commitment to education, institutional citizenship (administrative and organizational work at departmental/university level) and the third objective.\n\nTo achieve a gender- and ethnicity-neutral peer review process, we suggest standardized external peer assessments of blinded project descriptions and standardized automatic evaluations of merits and bibliometrics, based on a composite productivity score.\n\nTo conclude, we demonstrate a positive bias for European men to be selected for faculty positions at KI in 2014 after peer review evaluations. 
The successful candidate was a Swedish man without a family, with a thesis defense four years earlier, a high h-index, and a vast network of men researchers at KI. To nurture ground-breaking and innovative research, we suggest multiple evaluation measures of young researchers to promote equality and diversity in academia.\n\n\nData availability\n\nThe data used in this paper are based on public documents from Karolinska Institutet; the identity of the applicants has been kept anonymous in this paper and results presented in tables are based on group-level data only. In Sweden, a law (“Offentlighetsprincipen”, the principle of public access) stipulates that all documents registered at a governmental agency, e.g., a university such as Karolinska Institutet, are open to the public. Hence, anyone can ask to receive any document, such as applications for a position and instructions to reviewers, unless they are classified as secret. More information is available at http://ki.se/en/staff/official-documents-and-disclosure.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nGrants from Karolinska Institutet supported this work.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Karolinska Institutet for providing data to make this evaluation possible, Catharina Rehn and Agne Larsson at the University Library at Karolinska Institutet for bibliometric analyses, and the former board of equal treatment at Karolinska Institutet for support.\n\n\nSupplementary material\n\nSupplementary Figure 1: Distribution of assessed and granted Assistant Professorship positions at Karolinska Institutet 2011-2014, stratified by gender. In 2011 and 2012, the proportion of assessed applications for a faculty position at Karolinska Institutet was almost equal between men and women; however, this was not reflected in the number of granted applications, where men had a much higher success rate. In 2014, the proportions of assessed and granted applications were equal across gender. However, the assessed applications included only those passing the first bibliometric criterion (a total journal impact factor of all publications > 75); hence, more women most likely did not pass the first cut and did not make it into the assessment group in the figure.\n\nClick here to access the data.\n\nSupplementary Table 1. Top-ranked Universities according to the QS World University Rankings®, 2014/1523.\n\nClick here to access the data.\n\nSupplementary Table 2. Research fields defined by the Swedish Research Council.\n\nClick here to access the data.\n\nSupplementary Table 3. Method used in project plan.\n\nClick here to access the data.\n\nSupplementary Table 4. Top-ranked journals according to the Journal Citation Reports® 201424.\n\nClick here to access the data.\n\nSupplementary Table 5. 
Bibliometry of departmental variables of the KI-affiliated researchers connected to the applicants.\n\nClick here to access the data.\n\n\nReferences\n\nGemzöe L: Peer review of scientific quality - a research overview. 2010; Swedish Research Council.\n\nWenneras C, Wold A: Nepotism and sexism in peer-review. Nature. 1997; 387(6631): 341–3. PubMed Abstract | Publisher Full Text\n\nMoss-Racusin CA, Dovidio JF, Brescoll VL, et al.: Science faculty's subtle gender biases favor male students. Proc Natl Acad Sci U S A. 2012; 109(41): 16474–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEIGE: Gender Equality Index 2015 - Measuring gender equality in the European Union 2005–2012: Report. Publications Office of the European Union. 2015. Publisher Full Text\n\nUKÄ: Higher education in Sweden - 2015 status report. Swedish Higher Education Authority. 2015. Reference Source\n\nHellmark Knutsson H: Kunskap i samverkan - för samhällets utmaningar och stärkt konkurrenskraft. H. education, Editor. Reference Source\n\nAAMC: Unconscious Bias Training for the Health Professions. [cited 2017 November 28]. Reference Source\n\nUCSF: Unconscious Bias Training. [cited 2017 November 28]. Reference Source\n\nKaatz A, Lee YG, Potvien A, et al.: Analysis of National Institutes of Health R01 Application Critiques, Impact, and Criteria Scores: Does the Sex of the Principal Investigator Make a Difference? Acad Med. 2016; 91(8): 1080–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchiffbänker H: It’s the elephant in the room! – (gender) bias in ERC grant selection. In Second Swiss National Science Foundation (SNSF) conference on Gender and Excellence: different perspectives in focus. Bern, Switzerland. 2016. Reference Source\n\nMartell RF, Lane DM, Emrich C: Male-Female Differences: A Computer Simulation. Am Psychol. 1996; 51(2): 157–158. Publisher Full Text\n\nThe Swedish Agency for Public Management: Research grants from an equality perspective. 
2014.\n\nvan den Besselaar P, Sandstrom U: Gender differences in research performance and its impact on careers: a longitudinal case study. Scientometrics. 2016; 106(1): 143–162. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrapf M, Ursprung HW, Zimmermann C: Parenthood and Productivity of Highly Skilled Labor: Evidence from the Groves of Academe. In Working paper series. Federal Reserve Bank of St. Louis, Editor. St. Louis. 2014. Publisher Full Text\n\nFridner A, Norell A, Åkesson G, et al.: Possible reasons why female physicians publish fewer scientific articles than male physicians - a cross-sectional study. BMC Med Educ. 2015; 15: 67. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKandola B, Kandola J: The Invention of Difference - The story of gender bias at work. Pearn Kandola Publishing. 2013. Reference Source\n\nEliasson PO: SLU tar fram handlingsplan för jämställd rekrytering. In: Universitetsläraren. 2016. Reference Source\n\nLeslie SJ, Cimpian A, Meyer M, et al.: Expectations of brilliance underlie gender distributions across academic disciplines. Science. 2015; 347(6219): 262–5. PubMed Abstract | Publisher Full Text\n\nAlnebratt K, Jordansson B: Gender Equality, Meritocracy and Quality. In: Tidskrift för genusvetenskap. 2011. Reference Source\n\nWalton P: Athena SWAN award - something for Sweden? In: National equal treatment conference. Lund, Sweden. 2016.\n\nVR: Observations on gender equality in a selection of the Swedish research council's evaluation-panels 2012. 2013. Reference Source\n\nBenenson JF, Markovits H, Wrangham R: Rank influences human sex differences in dyadic cooperation. Curr Biol. 2014; 24(5): R190–1. PubMed Abstract | Publisher Full Text\n\nQS World University Rankings® 2014/15. [cited 2017 November 28]. Reference Source\n\nJournal Citation Reports. [cited 2017 November 28]. Reference Source"
}
|
[
{
"id": "33112",
"date": "08 May 2018",
"name": "Inés Sánchez de Madariaga",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe article is well written, structured and argued. The methodology used is appropriate and well applied. Bibliography, notes, and references to the state of the art are appropriate. The article addresses an important issue regarding gender bias in the evaluation of scientific research, on which analysis of empirical evidence is still scarce. I fully recommend its indexing.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3652",
"date": "18 May 2018",
"name": "Sara Hägg",
"role": "Author Response",
"response": "We thank the reviewer for the comments."
}
]
},
{
"id": "34299",
"date": "11 Jun 2018",
"name": "Stephen A Gallo",
"expertise": [
"Reviewer Expertise peer review",
"research funding",
"decision making"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis manuscript pertains to a study of how applicants for faculty positions at Karolinska Institute (KI) are assessed. Applicants submit proposal information which is first passed through a Phase 1 cut-off whereby the applicants have to have published a requisite number of publications above a threshold impact factor to qualify for further review (roughly a third). Phase 2 is then an external review by a panel of 6 reviewers (3 male/3 female) outside of KI. The proposals were scored in two dimensions, once based on the merits of the applicants' track records and training (Merits) and once on the quality of the research proposed (Project Plan). Scores from these two dimensions were added together to create the ranking. Demographic data was gleaned from applicant CVs, as was their publication list and any affiliated KI mentors/collaborators, etc. The bibliometric data from KI collaborators was gathered with the help of the University Library at KI. External reviewer Merit scores were plotted against the authors’ productivity scores for different subsets of data (gender and ethnicity). Also, Merit scores were separated into Quartiles (Q1-4), and proportionality across different variables was observed. Scores on Project Plan were not analyzed much in this work, but were found to be well correlated with Merit scores for both men and women. 
Multivariate analysis was conducted as well as principal component analysis (PCA) to see if there is clustering.\n\nThe authors find substantial differences in distribution across Merit Score Quartiles between male/female applicants, as a disproportionate # of men appeared in Q1. Gender proportions were more balanced in other quartiles. Regression analysis revealed a significant relationship between Merit score and productivity scores for men, but not for women or for non-Europeans. Stepwise regression revealed productivity scores, Project Plan scores and the presence of grants all had significant relationships to Merit scores for men, but only Project Plan scores were important for women. Applicants in Q1 also were more likely to be connected to male collaborators than women. PCA suggested having children was somewhat associated with lower Merit scores (Q4) and citation levels and 1st author publications were somewhat associated with Q1 Merit scores.\n\nThe goals of this research and the statistical analysis are straightforward, and there are some clear observations of not only disproportionate representation in grading but also review panels differentially emphasizing criteria across gender and ethnicity, specifically with bibliometric productivity and presence of grants. These results are disturbing as they suggest that biases are contributing to the observed disproportionate scoring. However, there are some issues that may need some clarification and consideration:\n\nFirstly, in Supplementary Fig 1, the proportion of women granted in 2011 and 2012 seems to be worse than in 2014; as 2014 has a triage of sorts based on bibliometric productivity, does this mean that the current system (2014) is less biased than previous years? It seems the proportions of women who applied vs granted for 2014 are pretty comparable, despite disproportionate representation in Q1 (Merit score). What are the reasons for this? 
Do Project scores compensate for biased Merit scores to push these applicants into the funding range? Looks like 38% of the total granted were women, which means about 4 out of the 10 granted were women. If only 2 women were granted from Q1 (merit score), but apparently 4 women were funded, 2 must have come from Q2-4, yet there were 12 other males in Q1. So either some males in Q1 merit score did not do well in their project scores, or the granting is not in strict order of rank? It would be interesting to know how the Project scores affected the ranking. Perhaps this could be addressed in the text.\n\nAlso, in 2014, because of the triage, the review panel only evaluates a subset of already excellent applicants (based on bibliometrics). But peer review is known to be poor at discriminating between highly qualified applicants1. This should probably be referenced and discussed in the text, as reviewer biases may be more prevalent in this situation.\n\nSecondly, in Fig 2, while male Merit scores correlated to productivity measurements, females scores did not. Yet, the authors mention that “The PCA analysis demonstrated an inverse correlation between having children and scores received on merits, as a family often slows down the production speed, resulting in fewer publications, shorter postdoc visits abroad and a higher academic age, resulting in less funding and more time spent on getting alternative funding, as commissioned research on short time contracts.” If female scores are not derived by the reviewers from their productivity, why would having children, and its effects on productivity, matter for reviewer’s scores? In fact, based on the regression, the authors state that “There were no significant effects of having children…on the score outcome.” So it’s a bit confusing what is happening here. 
Also, the authors mention women may be more affected by having children, “since women traditionally are more involved in family life.” Do the data show that having children and gender correlated in this sample?\n\nThirdly, it is clear there are differences in how reviewers evaluate applicants of different gender. The authors may mention work by Carole Lee on commensuration bias in the text, which I believe predicts this kind of behaviour2. Out of curiosity, do the authors have any information about the reviewer discussions that could shed light on how the panel weighed criteria relative to applicant demographics? Also, some research has come out suggesting there is more variation across reviewers than across proposals3. Do the authors have any information on how individual reviewer scores varied? Were some panelists more biased than others? Did this vary at all by reviewer gender? This may be beyond the scope of this study, but it might be appropriate to mention that there may be different sources of the bias, at the panel level vs individual level.\n\nA few more minor points:\n\nFor the linear regressions, only p-values were reported in what the authors refer to as a trend test. Could the authors include the correlation coefficient as well, as it seems there is a lot of spread in the data. Also, for Fig. 2c, the data for European men still have a good deal of variability that seems independent of actual productivity. Could the authors comment on potential sources for this variability?\n\nIn the text, it is said that “information on children was found in the CV or from time deducted from research due to parental leave;” is this information always reported on a CV? It was mentioned the authors imputed missing data; did this include data on children?\n\nCitations are time and field dependent; were they normalized for this productivity measurement for the applicants? If not, it may be difficult to compare. 
It seems, though, that citations were normalized for the KI collaborators/mentors. It’s unclear why different bibliometric approaches were used for applicants vs collaborators. Also, h-index is sensitive to age; was there an attempt to account for this confounder?\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3892",
"date": "14 Aug 2018",
"name": "Sara Hägg",
"role": "Author Response",
"response": "Responses to reviewer comments by Stephen A Gallo on Aug 9th 2018:\n\nFirstly, in Supplementary Fig 1, the proportion of women granted in 2011 and 2012 seem to be worse than 2014; as 2014 has a triage of sorts based on bibliometric productivity, does this mean that the current system (2014) is less biased than previous years? It seems as the proportions of women who applied vs granted for 2014 are pretty comparable, despite disproportionate representation in Q1 (Merit score). What are the reasons for this? Do Project scores compensate for biased Merit scores to push these applicants into the funding range? Looks like 38% of the total granted were women, which means about 4 out of the 10 granted were women. If only 2 women were granted from Q1 (merit score), but apparently 4 women were funded, 2 must have come from Q2-4, yet there were 12 other males in Q1. So either some males in Q1 merit score did not do well in their project scores, or the granting is not in strict order of rank? It would be interesting to know how the Project scores affected the ranking. Perhaps this could be addressed in the text.\n\nAuthors reply: We thank the reviewer for raising this concern and giving us the opportunity to clarify. It is true that in the graph provided in Supplementary Figure 1, the proportion of granted women was lower in 2011 and 2012 compared to 2014. However, roughly half of the applicants are usually women, but in 2014 – because of the triage system used this year – the figure only represents applicants who passed the first cut-off, hence fewer women appear in the graph in 2014 in the “Assessed” category. 
That said, the proportion of assessed and granted women applicants (36%) was equal in 2014 after the triage was taken into account, but should perhaps have been 50% to be completely fair given that this was probably the proportion of women applicants before the triage was applied.\n\nNevertheless, since there were only 2 women in Q1 and 4 women who were granted the position, 2 women were taken from the Q2 category to be prioritized above men in Q1. This was done based on interviews of the candidates; the project plan had nothing to do with it. Most probably, the KI leadership decided to rank these two women higher in order to reach the same proportions in the assessed and granted categories of the applicants. Hence, KI is fully aware of the gender inequality situation and usually interviews 2 candidates for every position in order to have some freedom in who is chosen.\n\nAlso, in 2014, because of the triage, the review panel only evaluates a subset of already excellent applicants (based on bibliometrics). But peer review is known to be poor at discriminating between highly qualified applicants1. This should probably be referenced and discussed in the text, as reviewer biases may be more prevalent in this situation.\n\nAuthors reply: We thank the reviewer for this comment. It is an interesting observation that reviewer bias may be more prominent because of the selection procedure done on the applications. Yet another reason for not conducting a bibliometric cut-off. We have added a sentence about this in the new version of the manuscript.\n\n“The data were sub-selected from all the applications, because only one third of them were externally assessed when a triage system using a bibliometric cut-off was applied. 
Therefore, the reviewer bias observed may be more prominent as it has been shown that peer review is poor at discriminating between highly qualified applicants (van den Besselaar, 2015).”\n\nSecondly, in Fig 2, while male Merit scores correlated to productivity measurements, female scores did not. Yet, the authors mention that “The PCA analysis demonstrated an inverse correlation between having children and scores received on merits, as a family often slows down the production speed, resulting in fewer publications, shorter postdoc visits abroad and a higher academic age, resulting in less funding and more time spent on getting alternative funding, as commissioned research on short time contracts.” If female scores are not derived by the reviewers from their productivity, why would having children, and its effects on productivity, matter for reviewer’s scores? In fact, based on the regression, the authors state that “There were no significant effects of having children…on the score outcome.” So it’s a bit confusing what is happening here. Also, the authors mention women may be more affected by having children, “since women traditionally are more involved in family life.” Do the data show that having children and gender correlated in this sample?\n\nAuthors reply: We agree with the reviewer on the fact that we were not able to show any clear associations between having children and productivity score in our data. However, we could see an inverse association – fewer applicants with children in Q1 compared to Q4 – although the trend was not statistically significant. In the discussion section we try to highlight what is known around this topic, and we do not actually claim that we have seen a clear relationship between gender and having children in our data. The family situation may impact the productivity of mothers as well as fathers. 
To clarify, we modified the text in the discussion slightly.\n\n“The PCA analysis demonstrated an inverse correlation between having children and scores received on merits, but we could not link this observation specifically to women in our analysis.”\n\nThirdly, it is clear there are differences in how reviewers evaluate applicants of different gender. The authors may mention work by Carole Lee on commensuration bias in the text, which I believe predicts this kind of behaviour2. Out of curiosity, do the authors have any information about the reviewer discussions that could shed light to how the panel weighed criteria relative to applicant demographics? Also, some research has come out suggesting there is more variation across reviewers than across proposals3. Do the authors have any information on how individual reviewer scores varied? Were some panelists more biased than others? Did this vary at all by reviewer gender? This may be beyond the scope of this study, but it might be appropriate to mention that there may be different sources of the bias, at the panel level vs individual level.\n\nAuthors reply: We thank the reviewer for the additional suggestions. We have now added some text discussing commensuration bias and on variability in reviewer scores to the new version of the manuscript. Unfortunately, we do not have any other information regarding the differences in scoring that may have been observed between the different reviewers on the applications in our analysis. Moreover, there were no discussions; each reviewer submitted their scoring independently and the overall rating was done by the KI leadership. We have now added a sentence in the limitation section about this. 
“More studies emerge on this topic pointing at different flaws using peer review, both at individual reviewer level (commensuration bias [Lee, 2015]) and between different reviewers [Pier, 2018].”\n\n“Moreover, we did not have the possibility to explore differences in rating between different reviewers.”\n\nA few more minor points: For the linear regressions, only p-values were reported in what the authors refer to as trend test. Could the authors include the correlation coefficient as well, as it seems there is a lot of spread in the data. Also, for Fig. 2c, the data for European men still have a good deal of variability that seems independent of actual productivity. Could the authors comment on potential sources for this variability?\n\nAuthors reply: We thank the reviewer for these suggestions, which improve the manuscript. We have now added a new column to table 2 where Pearson correlation coefficients have been added adjacent to the trend p-values. Regarding the second question, we agree on the fact that European men still have a great deal of variability explained by other factors than productivity. We ran a stepwise regression in those 23 individuals with complete data and found the significant contributing factors to be: Grants as PI (P-value=0.0004), Grants as co-PI (P-value=0.0012), Scores received on project plan (P-value=0.021), and Composite bibliometric score (P-value=0.0499).\n\nIn the text, it is said that “information on children was found in the CV or from time deducted from research due to parental leave;” is this information always reported on a CV? It was mentioned the authors imputed missing data; did this include data on children?\n\nAuthors reply: Information on children was not always present and we did not impute this variable. Hence, it is possible that there may be missing information regarding this variable that we cannot compensate for.\n\nCitations are time and field dependent; were they normalized for this productivity measurement for the applicants? 
If not, it may be difficult to compare. It seems, though, that citations were normalized for the KI collaborators/mentors. It’s unclear why different bibliometric approaches were used for applicants vs collaborators. Also, h-index is sensitive to age; was there an attempt to account for this confounder?\n\nAuthors reply: We are aware of the fact that the h-index is age-sensitive and that citations may have been better used in normalized versions. However, these were the variables available in the CV and made available to the reviewers. Although it would have been interesting to investigate other bibliometric variables, this was not possible as we were restricted to using the variables provided by the applicants themselves. As for the KI collaborators, we could perform a deeper analysis using field-normalized scores presented on group level with the help of the KI library, and only because these researchers were already KI affiliated."
}
]
}
] | 1
|
https://f1000research.com/articles/6-2145
|
https://f1000research.com/articles/7-1278/v1
|
14 Aug 18
|
{
"type": "Software Tool Article",
"title": "Visualizing balances of compositional data: A new alternative to balance dendrograms",
"authors": [
"Thomas P. Quinn"
],
"abstract": "Balances have become a cornerstone of compositional data analysis. However, conceptualizing balances is difficult, especially for high-dimensional data. Most often, investigators visualize balances with the balance dendrogram, but this technique is not necessarily intuitive and does not scale well for large data. This manuscript introduces the 'balance' package for the R programming language. This package visualizes balances of compositional data using an alternative to the balance dendrogram. This alternative contains the same information coded by the balance dendrogram, but projects data on a common scale that facilitates direct comparisons and accommodates high-dimensional data. By stripping the branches from the tree, 'balance' can cleanly visualize any subset of balances without disrupting the interpretation of the remaining balances. As an example, this package is applied to a publicly available meta-genomics data set measuring the relative abundance of 500 microbe taxa.",
"keywords": [
"compositional data",
"coda",
"balances",
"ilr",
"visualization",
"rstats",
"r"
],
"content": "Introduction\n\nA composition is a vector of positive measurements that sum to an arbitrary total1. Examples of compositions include measurements recorded in parts per million (ppm) or percentages, but also include measurements that are less obviously parts of the whole (e.g., count data generated by next-generation sequencing2). A component is one part of a composition. Compositional data analysis (CoDA) deals with the analysis of compositions. Compositional data, because they contain values bounded from zero to one, exist in a non-Euclidean space that renders conventional statistical methods invalid. To deal with compositionality, CoDA typically begins with a log-ratio transformation that maps data into an unbounded space where conventional statistical methods can be used. The simplest transformations, the centered log-ratio transformation and the additive log-ratio transformation, use a simple reference as the denominator of the log-ratio. A more complex transformation, the isometric log-ratio transformation, transforms the composition with respect to an orthonormal basis3. Alternatively, one could analyze the log-ratio of each pair of components directly4,5.\n\nBalances use a sequential binary partition (SBP) to define an orthonormal basis that splits the composition into a series of non-overlapping groups6. This design allows for an interpretation of the data at the level of the isometric log-ratio coordinates7. This SBP contains a diverging set of contrasts that are each interpretable as a measure of “Group 1 vs. Group 2” (following an isometric log-ratio transformation). For a D-part composition, the SBP defines D − 1 balances that decompose the variance such that the sum of the sample-wise variances for each balance in the tree equals the total sample-wise variance6. 
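To make the centered log-ratio transformation concrete, here is a minimal sketch of its arithmetic (in Python rather than R, with invented numbers; this is an illustration, not code from the balance package):

```python
import math

def clr(composition):
    """Centered log-ratio: log of each part relative to the geometric mean."""
    logs = [math.log(part) for part in composition]
    log_gmean = sum(logs) / len(logs)  # log of the geometric mean
    return [l - log_gmean for l in logs]

# clr coordinates always sum to zero, and depend only on ratios:
x = clr([1.0, 2.0, 4.0])
y = clr([10.0, 20.0, 40.0])  # same relative information, different total
```

Because only ratios matter, rescaling a composition to a different total leaves its clr coordinates unchanged, which is the scale invariance required of compositional methods.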
Balances (like the centered log-ratio transformation and the isometric log-ratio transformation) satisfy all properties required for compositional data analysis: scale invariance, permutation invariance, perturbation invariance, and sub-compositional dominance (reviewed in 8 and elsewhere).\n\nAlthough balances have proved useful for the analysis of compositional data, their usual application depends on generating a meaningful SBP. Sometimes, this involves manually creating an SBP based on expert opinion, with or without the assistance of exploratory analyses6. However, using expertise to build an SBP is not always desirable, especially for high-dimensional data (where each composition can measure thousands of components). Principal balance analysis is a data-driven alternative that, similar to principal component analysis, seeks to identify an SBP whose balances successively explain the maximal variance of a data set (a computationally expensive procedure approximated with heuristics)9,10. In the field of meta-genomics, where next-generation sequencing is used to count the relative abundance of microbe taxa, scientists have applied balances of SBPs to summarize and classify microbiome samples11. One study defined the SBP by hierarchically clustering the microbe taxa based on the outcome of interest12. Another defined the SBP based on the phylogenetic relationship between microorganisms13.\n\nOnce an SBP is generated, its balances can be visualized using a balance dendrogram14. The balance dendrogram illustrates (a) the distribution of samples across each balance, (b) the relationship between balances along the SBP tree, and (c) the decomposition of variance6,15. In addition, a balance dendrogram can show differences between sub-groupings of samples by coloring facets of the box plots. Although the balance dendrogram captures a vast amount of information, it may not provide the optimal visualization of balances. 
First, by building the figure around a tree, balance dendrograms place emphasis on the relationship between the balances, and not on the balances themselves. Second, each box plot has a unique scale positioned sporadically along the tree such that direct comparisons between one balance and all others become difficult. Third, the decomposition of variance uses lines that run parallel to the dendrogram branches, potentially confusing these concepts through use of a common symbol. In this software article, I present the R package balance for visualizing balances of compositional data. This package provides an alternative to the balance dendrogram that I hope will simplify balances for scientists less familiar with compositional data analysis.\n\n\nMethods\n\nWithin the R package universe, there are three standalone and well-documented tools for general compositional data analysis: compositions16, robCompositions17, and zCompositions18. The compositions::CoDaDendrogram function plots an archetypal balance dendrogram. There are also a number of domain-specific tools, tailored to next-generation sequencing data, and shown to work effectively19,20: ALDEx221,22 and ANCOM23 for differential abundance analysis, SparCC24 and SPIEC-EASI25 for the correlation analysis of sparse networks, propr26,27 for proportionality analysis, and philr13 for the analysis of phylogeny-based balances. Of these, the philr package computes balances and visualizes them with dendrograms, but does not plot a balance dendrogram per se.\n\nThe balance package is available for the R programming language and uses ggplot228 to visualize the distribution of samples across balances of a sequential binary partition (SBP) matrix. Each balance is calculated by the formula:\n\nbi = sqrt(|ip| |in| / (|ip| + |in|)) × ln( g(ip) / g(in) )\n\nfor bi = [b1, ..., bD−1] balances where g(x) is the geometric mean of x, ip is the sub-composition of positively-valenced components, and in is the sub-composition of negatively-valenced components. 
Here, |ip| denotes the number of parts in the sub-composition (and likewise for |in|).\n\nThe balance package29 computes and visualizes balances of compositional data. It requires few package dependencies, has negligible system requirements, and runs fast on a standard laptop computer (e.g., any modern budget CPU with 4GB RAM). To use balance, the user must provide a compositional data set (e.g., Table 1: samples as rows and components as columns) and a sequential binary partition (SBP) matrix (e.g., Table 2: components as rows and balances as columns). Below, balance is shown for an example data set from robCompositions17.\n\n\n\nAs compositional data, the total expenditure for each individual is arbitrary. These example data are taken from robCompositions17.\n\nThese example data are taken from robCompositions17.\n\nOptionally, users can color components or samples based on user-defined groupings. To do this, users must provide a vector of group labels for each component via the d.group argument (or for each sample via the n.group argument). The boxplot.split argument facets the box plots in a manner similar to the balance dendrogram15.\n\n\n\nFigure 1 compares the balance dendrogram to its alternative using the robCompositions data17.\n\nOn the left, the first branch of the balance dendrogram shows how the “services” and “other” components are contrasted against the remaining components. The box plot positioned at the branch shows the distribution of samples within this balance. The length of the trunk shows the proportion of variance explained by this balance. On the right, this same information is captured by a two-panel figure. The top balance in the left panel shows how the “services” and “other” components are contrasted against the remaining components. The top balance in the right panel shows the distribution of samples within this balance. In the right panel, the line length shows the range of the sample distribution, while its thickness shows the proportion of variance explained. 
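The balance computation itself is a short piece of arithmetic. The sketch below (Python rather than R, with a hypothetical SBP column coded as +1/−1/0) follows the formula above, using the part counts |ip| and |in| in the normalizing coefficient; it is an illustration, not the package's implementation:

```python
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def balance(sample, sbp_column):
    """One balance of `sample` given one SBP column (+1 numerator, -1 denominator, 0 excluded)."""
    num = [x for x, s in zip(sample, sbp_column) if s == 1]
    den = [x for x, s in zip(sample, sbp_column) if s == -1]
    r, s = len(num), len(den)  # counts of parts on each side of the contrast
    coef = math.sqrt(r * s / (r + s))
    return coef * math.log(geometric_mean(num) / geometric_mean(den))

# Contrasting parts 1-2 against part 3 of a 3-part composition:
b = balance([2.0, 8.0, 4.0], [1, 1, -1])  # g(2, 8) = 4, so the log-ratio is 0
```

Applying one such function per SBP column yields the D − 1 balances for each sample, which is what the package plots.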
Note that the median of this first contrast sits slightly positive, meaning that most samples spend more on [“alcohol”, “foodstuff”, “housing”].\n\n\nUse cases\n\nAs a use case, a publicly available microbiome data set is analyzed using balances. These data measure the abundance of microbe taxa in the feces of diabetics and their non-diabetic relatives30, making them a truly relative data set. Since these data contain many zeros that disrupt the log-ratio transformations, the zeros are first replaced through imputation by the zCompositions package. See the Supplementary Information for a demonstration of other pre-processing steps.\n\nTo identify balances for visualization, a sequential binary partition (SBP) matrix is made by hierarchically clustering components based on their proportionality measure ϕs (used here as a dissimilarity measure27), thus joining together components that covary similarly across all samples. The ape31 and philr13 packages transform the tree object into an SBP ready for analysis and visualization.\n\n\n\nSupplementary Figure 1 visualizes all 499 balances and contains the same information that a balance dendrogram would contain: (a) the left panel dot plot shows the components being contrasted, (b) the right panel box plot shows the distribution of samples across each balance, and (c) the right panel line length shows the range of the balance (the range should cleanly approximate the decomposition of variance for the purpose of exploratory visualization, though line width can optionally show the actual proportion of explained variance if desired). However, unlike a balance dendrogram, components and samples are projected on a common scale that facilitates direct comparisons and accommodates high-dimensional data. Yet, the main advantage of the balance package is that, by stripping the branches from the tree, it becomes possible to visualize any subset of balances without disrupting the interpretation of the remaining balances. 
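The variance decomposition summarized by these line widths can be verified directly: for a complete SBP, the sample variances of the D − 1 balances sum to the total variance of the clr-transformed data. A self-contained check in Python with invented 3-part compositions (an illustration, not the package's R code):

```python
import math

def clr(x):
    logs = [math.log(v) for v in x]
    m = sum(logs) / len(logs)
    return [l - m for l in logs]

def var(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

# Hypothetical 3-part compositions (rows = samples)
samples = [[1.0, 2.0, 4.0], [3.0, 1.0, 2.0], [2.0, 5.0, 1.0], [1.0, 1.0, 6.0]]

# Complete SBP for D = 3: balance 1 contrasts {x1, x2} vs {x3}; balance 2 contrasts {x1} vs {x2}
def b1(x):
    return math.sqrt(2.0 / 3.0) * ((math.log(x[0]) + math.log(x[1])) / 2.0 - math.log(x[2]))

def b2(x):
    return math.sqrt(1.0 / 2.0) * math.log(x[0] / x[1])

total_variance = sum(var([clr(s)[j] for s in samples]) for j in range(3))
balance_variance = var([b1(s) for s in samples]) + var([b2(s) for s in samples])
# The two quantities agree because the SBP balances form orthonormal ilr coordinates
```

This equality is what lets the figure rank balances by their proportion of explained variance.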
In Figure 2, we subset the visualization to include only the top 10 most explanatory balances, ranked by the proportion of variance explained.\n\n\n\nThe left panel shows how select microbe taxa are contrasted against others. The right panel shows the corresponding distribution of samples within each balance, with the line length showing the range of the distribution. Many of the most explanatory balances occur toward the base of the sequential binary partition (SBP) matrix. Yet, this subset visualization is not feasible with the balance dendrogram. Note that the order among the top 10 balances is determined procedurally to place the base of the tree at the top of the figure.\n\nThe d.group and n.group arguments offer a way to organize the results meaningfully. For example, the d.group can label microbes that most interest investigators, while the n.group can label patients based on clinical findings. Here, colored components (d.group) indicate the availability of supplemental meta-transcriptomic data, while colored samples (n.group) indicate the presence or absence of type-1 diabetes. In Figure 3, we repeat the visualization of the top 10 most explanatory balances, with points colored by the user-defined groupings.\n\nThe left panel shows how select microbe taxa are contrasted against others. The right panel shows the corresponding distribution of samples for each group within each balance, with the line length showing the total range of the distribution. There is apparently a difference in the median values of diabetics and non-diabetics for some balances. One could test the significance of these differences using conventional statistical methods like the Student’s t-test32. 
Note that the order among the top 10 balances is determined procedurally to place the base of the tree at the top of the figure.\n\n\nSummary\n\nCompositional data measure parts of a whole such that the total sum of the composition is irrelevant and each part is only interpretable relative to others. The analysis of compositional data requires interpreting the parts of the composition relative to the others. Log-ratio transformations offer a way to transform the data into an unbounded space where the analyst can apply conventional statistical methods. One transformation is the isometric log-ratio transformation, which transforms the composition with respect to an orthonormal basis. Balances use a sequential binary partition (SBP) to define an orthonormal basis that splits the composition into a series of non-overlapping groups. Balances can help the investigator identify trends in relative data, and are often visualized using a balance dendrogram. However, the balance dendrogram is not necessarily intuitive and does not scale well for large data. This paper introduces the balance package for the R programming language, a package for visualizing balances of compositional data using an alternative to the balance dendrogram. This alternative contains the same information coded by the balance dendrogram, but projects data on a common scale that facilitates direct comparisons and accommodates high-dimensional data. By stripping the branches from the tree, balance can cleanly visualize any subset of balances without disrupting the interpretation of the remaining balances.\n\n\nData availability\n\nAll data used for this analysis were acquired from the supplement of Heintz-Buschart et al.30. 
The supplement of this manuscript contains code to pre-process these data and reproduce the analysis.\n\n\nSoftware availability\n\nSoftware and source code available from: https://github.com/tpq/balance\n\nArchived source code at time of publication: https://doi.org/10.5281/zenodo.132686029\n\nSoftware license: GPL-2",
"appendix": "Author contributions\n\n\n\nT.P.Q. designed the project, implemented the package, and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nT.P.Q. thanks Sam Lee for his help with rubber-duck debugging.\n\n\nSupplementary material\n\nSupplementary Information. All data and scripts needed to reproduce the analysis.\n\nSupplementary Figure 1. Visualization of all 499 balances of the example microbial taxa data set. While this figure contains the same information as a balance dendrogram, it projects data on a common scale that facilitates direct comparisons and accommodates high-dimensional data.\n\n\nReferences\n\nAitchison J: The Statistical Analysis of Compositional Data. Chapman & Hall, Ltd., London, UK, 1986. Reference Source\n\nQuinn TP, Erb I, Richardson MF, et al.: Understanding sequencing data as compositions: an outlook and review. Bioinformatics. 2018; 34(16): 2870–2878. PubMed Abstract | Publisher Full Text\n\nEgozcue JJ, Pawlowsky-Glahn V, Mateu-Figueras G, et al.: Isometric Logratio Transformations for Compositional Data Analysis. Math Geol. 2003; 35(3): 279–300. Publisher Full Text\n\nGreenacre M: Towards a pragmatic approach to compositional data analysis. Technical Report 1554, Department of Economics and Business, Universitat Pompeu Fabra, 2017. Reference Source\n\nErb I, Quinn T, Lovell D, et al.: Differential Proportionality - A Normalization-Free Approach To Differential Gene Expression. Proceedings of CoDaWork 2017, The 7th Compositional Data Analysis Workshop; available under bioRxiv, 2018; 134536. Publisher Full Text\n\nPawlowsky-Glahn V, Egozcue JJ: Exploring Compositional Data with the CoDa-Dendrogram. Austrian J Stat. 2011; 40(1&2): 103–113. 
Reference Source\n\nvan den Boogaart KG, Tolosana-Delgado R: Descriptive Analysis of Compositional Data. In Analyzing Compositional Data with R, Use R!, Springer, Berlin, Heidelberg, 2013; 73–93. Publisher Full Text\n\nvan den Boogaart KG, Tolosana-Delgado R: Fundamental Concepts of Compositional Data Analysis. In Analyzing Compositional Data with R, Use R!, Springer Berlin Heidelberg, 2013; 13–50. Publisher Full Text\n\nPawlowsky-Glahn V, Egozcue JJ, Tolosana Delgado R: Principal balances. Proceedings of CoDaWork 2011, The 4th Compositional Data Analysis Workshop, 2011; 1–10. Reference Source\n\nMartín-Fernández JA, Pawlowsky-Glahn V, Egozcue JJ, et al.: Advances in Principal Balances for Compositional Data. Math Geosci. 2018; 50(3): 273–298. Publisher Full Text\n\nRivera-Pinto J, Egozcue JJ, Pawlowsky-Glahn V, et al.: Balances: a New Perspective for Microbiome Analysis. mSystems. 2018; 3(4): pii: e00053-18. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorton JT, Sanders J, Quinn RA, et al.: Balance Trees Reveal Microbial Niche Differentiation. mSystems. 2017; 2(1): pii: e00162-16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSilverman JD, Washburne AD, Mukherjee S, et al.: A phylogenetic transform enhances analysis of compositional microbiota data. eLife. 2017; 6: pii: e21887. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEgozcue JJ, Pawlowsky-Glahn V: Groups of Parts and Their Balances in Compositional Data Analysis. Math Geol. 2005; 37(7): 795–828. Publisher Full Text\n\nThió-Henestrosa S, Egozcue JJ, Pawlowsky-Glahn V, et al.: Balance-dendrogram. A new routine of CoDaPack. Comput Geosci. 2008; 34(12): 1682–1696. Publisher Full Text\n\nvan den Boogaart KG, Tolosana-Delgado R: “compositions”: A unified R package to analyze compositional data. Comput Geosci. 2008; 34(4): 320–338. Publisher Full Text\n\nTempl M, Hron K, Filzmoser P: robCompositions: an R-package for robust statistical analysis of compositional data. 
John Wiley and Sons, 2011. Publisher Full Text\n\nPalarea Albaladejo J, Martín Fernández JA: zCompositions - R package for multivariate imputation of left-censored data under a compositional approach. Chemometr Intell Lab Syst. 2015; 143: 85–96. Publisher Full Text\n\nThorsen J, Brejnrod A, Mortensen M, et al.: Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16s rRNA gene amplicon data analysis methods used in microbiome studies. Microbiome. 2016; 4(1): 62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuinn TP, Crowley TM, Richardson MF: Benchmarking differential expression analysis tools for RNA-Seq: normalization-based vs. log-ratio transformation-based methods. BMC Bioinformatics. 2018; 19(1): 274. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernandes AD, Macklaim JM, Linn TG, et al.: ANOVA-like differential expression (ALDEx) analysis for mixed population RNA-Seq. PLoS One. 2013; 8(7): e67019. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernandes AD, Reid JN, Macklaim JM, et al.: Unifying the analysis of high-throughput sequencing datasets: characterizing RNA-seq, 16S rRNA gene sequencing and selective growth experiments by compositional data analysis. Microbiome. 2014; 2: 15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMandal S, Van Treuren W, White RA, et al.: Analysis of composition of microbiomes: a novel method for studying microbial composition. Microb Ecol Health Dis. 2015; 26: 27663. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFriedman J, Alm EJ: Inferring correlation networks from genomic survey data. PLoS Comput Biol. 2012; 8(9): e1002687. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKurtz ZD, Müller CL, Miraldi ER, et al.: Sparse and compositionally robust inference of microbial ecological networks. PLoS Comput Biol. 2015; 11(5): e1004226. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLovell D, Pawlowsky-Glahn V, Egozcue JJ, et al.: Proportionality: a valid alternative to correlation for relative data. PLoS Comput Biol. 2015; 11(3): e1004075. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuinn TP, Richardson MF, Lovell D, et al.: propr: An R-package for Identifying Proportionally Abundant Features Using Compositional Data Analysis. Sci Rep. 2017; 7(1): 16252. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWickham H: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016. Reference Source\n\nQuinn T: tpq/balance: balance-0.0.8 (Version balance-0.0.8). Zenodo. 2018. http://www.doi.org/10.5281/zenodo.1326860\n\nHeintz-Buschart A, May P, Laczny CC, et al.: Integrated multi-omics of the human gut microbiome in a case study of familial type 1 diabetes. Nat Microbiol. 2016; 2(1): 16180. PubMed Abstract | Publisher Full Text\n\nParadis E, Claude J, Strimmer K: APE: Analyses of Phylogenetics and Evolution in R language. Bioinformatics. 2004; 20(2): 289–290. PubMed Abstract | Publisher Full Text\n\nStudent: THE PROBABLE ERROR OF A MEAN. Biometrika. 1908; 6(1): 1–25. Publisher Full Text"
}
|
[
{
"id": "37159",
"date": "28 Aug 2018",
"name": "Vera Pawlowsky-Glahn",
"expertise": [
"Reviewer Expertise Statistics - compositional data analysis"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWe have not been able to access the R-library “balance”. Consequently, we base our comments exclusively on the text presented. This circumstance motivates the answer “partly” to some of the previous questions. Thus, we cannot evaluate how well the software performs with particular datasets or how well it is documented.\n\nThe paper “Visualizing balances of compositional data” presents an R-package to visualize balances of compositional, high-dimensional data. The original visualization of balances was in the form of a dendrogram which represented in one figure the sequential binary partition, the mean of the corresponding balances, the variance corresponding to each balance, and a boxplot of each balance, if necessary separated by subgroup in several box-plots. A dendrogram is clearly difficult to visualize if a composition has several hundreds or even thousands of components. The tool described in the paper thus fills a need in the field of compositional data analysis. The visualization strategy used can be summarized in a colloquial way by saying: “separate the dendrogram into two figures, one corresponding to the partition and grouping of parts, the other to the boxplots”. It is an interesting complement to the balance dendrogram, but we do not think it is an alternative. 
For high dimensional compositions it would be interesting to have three or more pictures, one corresponding to the partition, one to the box-plots, and the others to particular branches of the dendrogram, possibly summarizing groups of components by their common characteristic, if any. It would look like the three pictures depicted in Figure 1 of the paper.\n\nOne of the issues presented is the representation of the box-plots on a common scale, which is not completely new. It has been previously used at least in two papers, namely Lovell et al. (2013)1 and Pawlowsky et al. (2015)2. Nevertheless, the additional features of visualizing the proportion of explained variance by the thickness of the segment covering the range and by the inclusion of the data as dots can be helpful in understanding the role of each balance. At the same time, in certain circumstances, like with high dimensional data observed in a large sample, it will probably still be difficult to visualize the mentioned features. In such a case, it might be interesting to represent the first principal balances, something already considered in the paper.\n\nThe most useful features of compositional dendrograms are (a) the visualization of the decomposition of the total variance into contributions of each balance for one or more populations in the sample; (b) the comparison of mean values between populations; (c) identifying groups of parts as they participate in balances defined by the partition. The proposed software solves point (b) efficiently by comparing box-plots on a homogenized scale. Point (c) can be supplemented by including a tool able to enumerate parts in the numerator and denominator of the balance. This is important in high dimensional compositions where the labels in the partition panel are not identifiable (example Figure 2, left panel). Point (a) is covered less adequately. 
A partial solution is suggested in section 3) of the paper by colouring some segments in the partition panel according to the value of the variance. However, it seems useful to have a tool allowing alphanumerical output of ordered variances or cuts of the partition tree. For instance, if somebody is looking for linear associations of groups of taxa (Egozcue, Pawlowsky-Glahn and Gloor 20183) detecting balances whose variance is smaller than a certain threshold, it is useful to visualize those balances by colour in the partition panel, but also to get a list of the parts involved in such balances.\n\nWe expect that the presented software, modified as suggested, will become a useful tool for the analysis of high dimensional compositional data.\n\nMinor issues that need to be revised in the paper are the following:\nIntroduction\na) Nowadays, compositions are not defined as vectors of positive measurements that sum to a given total, arbitrary or not, but as representatives of equivalence classes in the positive orthant of D-dimensional real space (Barceló-Vidal et al. 20014; Barceló-Vidal and Martín-Fernández 20165).\nb) The usual representative of the equivalence classes is bounded between 0 and a given constant k. This defines a subset of real space which is not a subspace. Moreover, this set has a Euclidean space structure given by the operations that define the “Aitchison geometry” (Pawlowsky-Glahn and Egozcue, 20016). Thus, it is not adequate to say that compositional data exist in a non-Euclidean space.\nc) The description of the additive (alr) and centred (clr) log-ratio transformations as “simple” is misleading. The alr defines coordinates in an oblique basis in the above-mentioned Aitchison geometry, while the clr leads to coordinates in a generating system and changes with subcompositions. Thus, results of clr components are not subcompositionally coherent. 
Therefore, interpretation of results is very difficult, as users tend to interpret results in terms of the component in the numerator only, not taking into account the role of the denominator. Furthermore, in many cases results obtained with the alr are not permutation invariant, something that needs to be checked for each method. One of the most striking cases is e.g. regression, where the equation itself is permutation invariant, but not so the goodness-of-fit criteria.\nd) The suggested alternative analysis in terms of simple log-ratios is also not simple at all, as it leads to the most general models, i.e., general log-contrasts. The exponents involved in such a log-contrast are in general different for each part of the composition.\n\nMethods\nThe equation given for computing balances is not correctly described. The term |ip| does not describe the norm or length of the sub-composition, but the number of parts in the sub-composition.\n\nUse cases\na) The optional line width of the range of box-plots to illustrate the proportion of variance explained by a balance is not really informative in the case of high-dimensional data. In the low-dimensional case we think the dendrogram is more informative, as one can recognise easily if the balance that explains the largest proportion of variance corresponds to the first steps of the partition, thus involving a large number of parts or, on the contrary, only a small number of parts. This deficiency can be mitigated by colouring bars in the partition panel, for instance, plotting in red the lines corresponding to a given probability quantile of large variances and in blue those corresponding to a quantile range of small variances.\nb) Figure 2 shows two limitations of the proposed visualisation. i) In the left panel it is clear which taxa are involved in each balance, but not which taxa are in the numerator and which in the denominator. 
Perhaps a good alternative would be to use different colours for each group, or to reorder the taxa in such a way that those in the numerator are always on the left-hand side and a vertical bar indicates the dividing point.\nii) Numbering the balances by decreasing (increasing) explained variance would help in rapidly recognising which balance is the most (least) informative in this sense.\n\nSummary\na) It is not always true that “log-ratio transformations offer a way to transform the data into an unbounded space where the analyst can apply conventional statistical methods”. For this to be true you need at least the transformation to be an isometry. For example, the alr is not an isometry, and thus conventional statistical methods should not be applied blindly.\nb) Balances are a particular case of isometric log-ratio transformation. Another example is given by general log-contrasts obtained as coordinates in compositional principal component analysis.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? No\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "37158",
"date": "03 Sep 2018",
"name": "Marc Noguera-Julian",
"expertise": [
"Reviewer Expertise Bioinformatics",
"Data Analysis",
"Metagenomics"
],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript by Quinn entitled “Visualizing balances of compositional data” introduces a new way to visualize balances and component partitions in compositional data. Typically, these are visualized using dendrograms where branches represent the component partitions (obtained through an external method and/or expert knowledge). The height of the branching point on the y-axis represents the proportion of variance explained, while the intersection at the branching point represents the mean of the balance. The structure of the dendrogram relates to the partition hierarchy. In addition, boxplots can be defined on top of the dendrogram, at each branching point, depicting the distribution of balance values. These dendrograms are useful when a handful of variables are analyzed but are hardly interpretable in a high-dimensional space, which is often the case in omics-derived data.\n\nAt present, the interpretation of CoDA results in terms of clinical/biological meaning is one of the bottlenecks for the adoption of such theoretical frameworks in omics data-based clinical research. 
In this context, new ideas on CoDA visualization are welcome and useful.\n\nQuinn’s proposal splits the data that characterizes balances into two parts, one showing the partition of the components into balances (left sub-plot) and the other the distribution of the balance values when applied to the compositional data, together with the distribution of the variance of the same data (right sub-plot). This is similar to moving the boxplots aside from the dendrogram and turning the dendrogram into a simple sequential partition diagram. In addition, a grouping factor can be projected onto the right sub-plot, which, thanks to a common-scale axis for all balances, makes it easier to see differences in one or more balances according to a categorical response variable. The representation of balances is now “free” from a dendrogram structure, and this presents some advantages such as ranking or selecting balances, but also some inconveniences such as the need to check which components are in the “num” and “den” for each of the balances, which is now a moving target.\n\nI think that this new presentation of balance data is useful in the low-dimensional setting. Unfortunately, the interpretation of these split diagrams still appears difficult in high-dimensional space, as shown in the supplementary information example, and needs an inspection of each balance sub-space. While representing the proportion of variance using line thickness is innovative, it is also difficult to compare across different balances.\n\nThe code to do this is based on ggplot2, available and easy to use and adapt to each user's needs. Input is a compositional data frame and a binary partition encoding the role of each component in each balance. 
Thus, the input must be generated outside of the present code, which only represents the data. This allows for flexibility, as the code focuses only on the calculation and representation of pre-specified balances.\n\nI have some minor suggestions to improve the functionality of the code which may facilitate data interpretation:\n\nSelection: The author has added the possibility to plot multi-group boxplots. The author already mentions that this could be used for statistical testing, but it would be helpful to add an option to select/highlight those balances that have statistically significant differences among groups and/or plot some statistical testing results on the right-hand plot. Ranking: It looks like balances are ranked on the proportion of variance explained. However, in the provided code/examples, it is unclear whether the default ranking of the balances is by decreasing proportion of variance explained, since this is not the case when the weigh.var option is set to TRUE. It would also be useful to rank the balances according to their discrimination power over a response variable. With the standard balance dendrogram, when overlapping datasets, the variance for each of the subgroups could be represented (Pawlowsky-Glahn, Egozcue, Austrian Journal of Statistics, 2011). I think this feature is lost in this representation.\n\nThe manuscript is well written and easy to follow by the expert reader. I’d like to highlight some minor points.\nThe author states that these diagrams can accommodate high-dimensional data, but it does so by focusing on sub-groups of the high-dimensional data, and in the code, these sub-groups are selected prior to the balance function. Therefore, the proposed code/diagram can really only accommodate subsets of high-dimensional data.\n\nSome details that caught my attention and that may be useful for future development:\n\nThe name of the main function is balance. It overlaps with compositions::balance and ape::balance. 
While this is not a hard problem, it may add confusion to the function namespace in this kind of analysis. I’d consider changing it at this stage. When plotting group-based boxplots, data points are scattered over both (or all) boxplots; it would be clearer if data points were scattered over their own group’s boxplot. In the Figure S1 example, some balances show zero values and near-zero variances, probably due to zero over-inflation and zero-dominated (downstream-imputed) balances. This acts as a kind of white noise in the diagram. Correlation: While it is outside the scope of this work, for the sake of utility it would be helpful, when there is a continuous response variable, to plot points in a scatter plot with a y sub-axis for each boxplot, in such a way that the correlation between each of the balances and the response variable is visible, and also to be able to highlight/select those balances.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
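For readers unfamiliar with how the plotted balance values are computed from a composition and a binary partition, the standard isometric log-ratio definition is b = √(rs/(r+s)) · ln(g(num)/g(den)), where g(·) is the geometric mean and r, s are the numbers of parts in the "num" and "den" groups. A minimal sketch in Python (the function names here are illustrative and are not the API of the reviewed package):

```python
import math

def gmean(values):
    """Geometric mean of a sequence of positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def ilr_balance(composition, num_idx, den_idx):
    """Isometric log-ratio balance for one compositional sample.

    composition: positive component values for a single sample
    num_idx, den_idx: indices of the parts assigned to the "num"
    and "den" groups of the binary partition
    """
    r, s = len(num_idx), len(den_idx)
    coef = math.sqrt(r * s / (r + s))
    g_num = gmean([composition[i] for i in num_idx])
    g_den = gmean([composition[i] for i in den_idx])
    return coef * math.log(g_num / g_den)
```

For example, `ilr_balance([1, 1, 4, 4], [2, 3], [0, 1])` gives ln 4 ≈ 1.386, while any composition whose two groups share the same geometric mean yields a balance of zero.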
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1278
|
https://f1000research.com/articles/7-1241/v1
|
10 Aug 18
|
{
"type": "Research Article",
"title": "Audit of transvaginal sonography of normal postmenopausal ovaries by sonographers from the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS)",
"authors": [
"Will Stott",
"Aleksandra Gentry-Maharaj",
"Andy Ryan",
"Nazar Amso",
"Mourad Seif",
"Chris Jones",
"Ian Jacobs",
"Max Parmar",
"Usha Menon",
"Stuart Campbell",
"Matthew Burnell",
"Will Stott",
"Aleksandra Gentry-Maharaj",
"Andy Ryan",
"Nazar Amso",
"Mourad Seif",
"Chris Jones",
"Ian Jacobs",
"Max Parmar",
"Usha Menon",
"Matthew Burnell"
],
"abstract": "Background: We report on a unique audit of seven sonographers self-reporting high visualization rates of normal postmenopausal ovaries in the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). This audit was ordered by the trial’s Ultrasound Management Subcommittee after an initiative taken in 2008 to improve the quality of scanning and the subsequent increase in the number of sonographers claiming very high ovary visualisation rates. Methods: Seven sonographers reporting high rates (>89%) of visualizing normal postmenopausal ovaries in examinations performed between 1st January and 31st December 2008 were identified. Eight experts in gynaecological scanning reviewed a random selection of exams performed by these sonographers and assessed whether visualization of both ovaries could be confirmed (cVR-Both) in the examinations. A random effects bivariate probit model was fitted to analyse the results.\n\nResults: The eight experts reviewed images from 357 examinations performed on 349 postmenopausal women (mean age 60.0 years, range 50.2-73.3) by the seven sonographers. The mean cVR-Both obtained from the model for these sonographers was 67.2% with a range of 47.6-86.5% (95%CI 63.9-70.5%). The range of cVR-Both between the experts was 47.3-88.3% and the intra-class correlation coefficient (ICC) for left and right ovary confirmation was 0.39.\n\nConclusions: The audit suggests that self-reported visualization of postmenopausal ovaries is unreliable, as visualisation of both ovaries could not be confirmed in almost a third of examinations. The agreement for visualization of both ovaries based on review of a static image between experts and sonographers and between expert reviewers alone was only moderate. Further research is needed to develop reliable Quality Control metrics for transvaginal ultrasound.",
"keywords": [
"Ovarian Cancer Screening",
"Transvaginal Sonography Scans (TVS)",
"Ultrasound",
"Audit",
"Quality Control (QC)",
"Visualisation Rate (VR)"
],
"content": "Introduction\n\nThe normal ovary of a postmenopausal woman is a small structure (mean volume 1.25ml1) usually situated lateral to the uterine fundus and in close relation to the internal iliac vein. In as many as 40% of transvaginal ultrasound (TVS) examinations2 the ovary may not been seen as typically they shrink with age and are sometimes very difficult to locate3,4. For this reason in the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) and other screening trials2,5,6 a pragmatic approach is taken whereby an annual screening examination may be judged satisfactory even if both ovaries are not seen, given that a good view has been achieved of the Iliac vessels in the pelvic side wall. However, the sonographer should always attempt to visualize both ovaries as this provides the maximum assurance that an early ovarian cancer has been excluded.\n\nA metric commonly used in the quality control (QC) of TVS is self-reported visualisation rate (VR), defined as the number of examinations in which the ovaries were visualized as a proportion of all examinations performed by the sonographer7. In 2008, UKCTOCS implemented an accreditation programme which included the monitoring of individual sonographer VR over a 3 month period8. This revealed that some sonographers were self-reporting higher than expected VR. Therefore in 2009, it was decided to audit the performance of these high scoring sonographers to confirm independently whether it is possible to achieve high rates of ovary visualisation in postmenopausal women. We report on this audit and its outcome.\n\n\nMethods\n\nThe TVS in this study were performed as part of the UKCTOCS, which is a multi-centre randomized controlled trial of 202,638 women volunteers from 13 trial centres throughout Northern Ireland, Wales and England (ISRCTN22488978). The inclusion criteria specified by the trial protocol were postmenopausal women aged between 50–74 years. 
The women were randomised into three groups, with the ultrasound arm involving 50,639 women who underwent annual TVS examinations.\n\nSonographers performing the examinations were required to 1) record whether the ovary had been visualized, 2) measure the ovary in 3 orthogonal dimensions, and 3) comment on its morphology. These observations were stored centrally in the Trial Management System (TMS). The sonographer measured the dimensions of each ovary using digital callipers manually positioned on the extent of the ovary boundary in static images in two orthogonal planes during the examination; see Figure 1. The distance between the calliper marks was displayed in millimetres at the bottom of the image and copied into the TMS exam record fields as D1, D2 and D3. D1 represents the longest ovarian distance in longitudinal section (LS) and D2 is the widest distance (anteroposterior, AP), measured at 90° to the line used to measure D1. The largest diameter of the ovary in transverse section (TS) is measured as D3. These dimensions allow calculation of ovarian volume using the prolate ellipsoid formula: D1 × D2 × D3 × 0.523.\n\nFigure 1. This ovary was confirmed as normal and correctly measured by the expert reviewer.\n\nThe TVS images used to measure the ovaries for each patient were saved on the ultrasound machines at each of the 13 trial centres and periodically copied onto disks which were sent by courier to the trial coordinating centre in London, where they were copied into a bespoke computer system called the Ultrasound Record Archive (URA). These archived static images allow independent confirmation as to whether the feature measured was an ovary, thus permitting a subsequent audit of the sonographer’s self-reported VR.\n\nSonographers who had performed >100 TVS exams between January 2008 and January 2009 and who had reported a high rate of ovary visualisation (>89%) over this period were identified. 
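The prolate-ellipsoid volume calculation described above is straightforward to express in code. A minimal sketch (an illustrative helper, not part of the TMS; it assumes the standard constant π/6 ≈ 0.523 and converts the recorded millimetre diameters to centimetres so the result is in millilitres):

```python
import math

def ovarian_volume_ml(d1_mm, d2_mm, d3_mm):
    """Prolate-ellipsoid volume estimate: V = (pi/6) * D1 * D2 * D3.

    Inputs are the three orthogonal diameters in millimetres (as recorded
    in the TMS exam fields); the result is in millilitres (cm^3).
    """
    d1, d2, d3 = (d / 10.0 for d in (d1_mm, d2_mm, d3_mm))  # mm -> cm
    return (math.pi / 6.0) * d1 * d2 * d3
```

A 10 × 10 × 10 mm ovary, for instance, has an estimated volume of about 0.52 ml, of the same order as the mean normal postmenopausal volume of 1.25 ml cited in the Introduction.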
The audit dataset was created by assigning a random number to the annual exams performed by each of the sonographers during this same period and then making a random selection for each sonographer based on the value of these numbers. Inclusion criteria were both ovaries reported as visualized and the examination classified as having normal morphology. Examinations were excluded if the corresponding images were not stored in the URA. All exams audited were performed using a Medison Accuvix (model XQ, software v1.08.02, transvaginal probe type EC4-9IS 4-9 MHz).\n\nEight members of the UKCTOCS Ultrasound Subcommittee who were highly experienced in gynaecological scanning undertook the review. They included three consultant gynaecologists, two gynaecological radiologists and three National Health Service (NHS) superintendent grade sonographers. Originally there were nine experts, but it subsequently transpired that one of the reviewers was also one of the seven sonographers being audited. Therefore, it was decided to remove this reviewer’s results from the study. Accordingly, though these experts were initially split into three groups of three, one group was reduced to two experts following the exclusion of reviewer nine.\n\nThe audit dataset was randomly split such that each group reviewed 119 exams (357 exams in total) and each expert was asked to assess 17 exams performed by each of the seven sonographers. In this way, each exam was judged by at least two separate experts. In order to avoid bias, each expert was blinded as to the name of the sonographer being reviewed and the assessments of the other experts.\n\nThe primary aim of the audit was to confirm the self-reported visualisation of both ovaries (cVR-Both) in examinations by each of the seven sonographers, which by extension required each expert reviewer to identify the exact images used to measure both ovaries from all of the images captured during the exam (mean 5.4, range 1–30). 
A software tool called osImageManager was developed specifically for the reviewers (Figure 2). It facilitated display of the images associated with each of the examinations and also recorded the review results in the audit database.\n\nThe baseline characteristics of the women are reported by trial centre code, age, years since last period, body mass index (BMI), hysterectomy status, oral contraceptive pill (OCP) and hormone replacement therapy (HRT) use. Information from the UKCTOCS sonographer accreditation records was used to calculate the mean, range and standard deviation of their collective experience. Their levels of training and qualifications were also compared. Raw confirmed VRs for each sonographer, each expert and overall were calculated for the left ovary (LO) and right ovary (RO), as well as jointly for both LO and RO in the same examination. However, for formal inference we calculated the confirmed VR based on a statistical model.\n\nAll modelling was performed in Stata v14.2.\n\nModel description. The data was analysed using a bivariate probit random effects model. The bivariate outcome was the experts’ binary judgement of whether they confirmed the scan as seen or not seen, for both LO and RO. For the LO and RO portions of the model there was a scan-specific random intercept term representing the dependence of judgements within each scan, rated by three (or two) expert reviewers. The LO and RO random effects were allowed to covary, as were the LO and RO error terms. In addition, the model had categorical fixed effects for the original sonographer (n=7) and the expert (n=8). The details of the model can be found in Supplementary File 1. The model was fitted in Stata 14.2 with the user-written command cmp9. Two additional models were fitted. Firstly, one that included the factor ‘qualification’ (gynaecologist, radiologist, sonographer) instead of the factor ‘expert’, which, being fully nested within ‘qualification’, meant both terms could not be included together. 
Secondly, the factor ‘expert’ was simply taken out for reasons described in ‘Predictions and correlations’.\n\nThe use of this statistical model allowed us to simultaneously analyse all the data despite some scans being judged by a different number of experts. This included instances when only the LO or RO of a scan had been reviewed. By making use of model-based predictions, the model allowed us to assess the impact of each sonographer (or reviewer) whilst generalizing over the sample of reviewers (or sonographers) and volunteers, separately for LO and RO, but also for both ovaries in a joint manner. The raw proportions, summed over either sonographer or reviewer, fail to take into account the within-volunteer correlation. All joint significance tests of the parameters were Wald tests.\n\nPredictions and correlations. Stata’s post-estimation command margins was used to make predictions based on the probit model parameters. Specifically, marginal probability predictions were made over the whole sample, and for each sonographer and expert for both equations (LO and RO). In addition, the joint probability of a positive outcome for both LO and RO was calculated by incorporating the estimated correlation of both the random intercepts and error terms. All marginal predictions were ‘population-averaged’ in that they were integrated over the value range of the random effects. Individual random effects were calculated using empirical Bayes means. Separate intraclass correlation coefficients (ICCs) for LO and RO were calculated using the variance component estimates (see Supplementary File 1). The ICCs estimate the dependence between the dichotomous outcomes within the same volunteer, after taking into account the fixed effects. The ICC was also calculated based on a model with no ‘expert’ term, as its inclusion will provide an ICC that reflects within-scan correlation after adjusting for each expert’s general propensity to confirm visualisation. 
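The ‘population-averaged’ probit predictions described above have a closed form when integrating over a normal random intercept: averaging Φ(xβ + u) over u ~ N(0, σ²) gives Φ(xβ/√(1 + σ²)). A sketch of that standard identity (illustrative only; the paper’s actual predictions come from Stata’s margins command, not this code):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF, Phi(z), via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def population_averaged_prob(xb, sigma2):
    """Marginal (population-averaged) probability for a probit model
    with a normal random intercept of variance sigma2: the integral
    over the random effect collapses to Phi(xb / sqrt(1 + sigma2))."""
    return std_normal_cdf(xb / sqrt(1.0 + sigma2))
```

The random-effect variance shrinks marginal probabilities toward 0.5: with σ² = 0.76, a linear predictor of 1.0 yields roughly 0.77 rather than the conditional Φ(1) ≈ 0.84.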
Supplementary File 1 also describes the calculation of the correlation between the left and right ovary results for a given volunteer on a given review occasion, necessary for the joint probability estimation. Note that the correlations from a probit model are ‘tetrachoric’ – that is, the correlation of two theorised normally distributed continuous latent variables, which produce the observed binary outcomes.\n\n\nResults\n\nAn audit dataset of 357 annual TVS exams from 349 women was produced by making a random selection of 51 exams performed by each of the seven UKCTOCS sonographers who had reported ovary visualisation rates >89% for the exams they had performed during the study period (1/1/08 to 31/12/08), irrespective of outcome: normal, abnormal or unsatisfactory. However, only examinations with reported normal morphology were reviewed. Fifteen reviews were ineligible for various reasons.\n\nThe eight expert reviewers performed the image review at locations in Derby, Manchester, Bristol and London. They collectively spent approximately 100 hours conducting their audit of the work of the seven UKCTOCS sonographers. The sonographers had a mean experience of 14.5 years (range 7–23, SD 7). They operated in five different trial centres, with two pairs of sonographers working in the same centre. All sonographers were accredited by UKCTOCS during 2008.\n\nThe 349 women whose exams were included in the audit dataset had a mean age of 60.0 years (range 50.2–73.3, SD 5.85), mean age at last period of 49.3 years (range 27.9–70.0, SD 5.66), mean BMI of 26.2 (range 17.5–45.1, SD 4.17), use of HRT at recruitment of 24.9%, ever use of OCP of 64.7% and a history of hysterectomy in 12.4%.\n\nIn total, the model fitted 1871 ultrasound scan assessments, formed from 940 LO scans and 931 RO scans, resulting in 945 scans where at least one ovary was included. 
The fixed effects of both sonographer and expert were highly significant for either left or right ovary (joint p<0.0001 always, Table 1). As expected, the fitted predictions for LO or RO separately were close to the raw proportions over the same sample (see Table 2), because the design was (largely) balanced and the predictions did not include an adjusting variable. The overall LO prediction was 0.78 (95% CI: 0.75-0.81), but by sonographer this ranged from 0.65 to 0.89. By reviewer, the range was from 0.59 to 0.93. For RO, predicted probabilities were typically higher; the overall prediction was 0.80 (95% CI: 0.77-0.83), sonographer predictions ranged from 0.62 to 0.97 and reviewer predictions ranged from 0.66 to 0.94. Not all sonographer or reviewer rank orderings were the same for LO and RO; for example, reviewer 7 was the lowest for LO and reviewer 5 for RO. This was in contrast to the raw proportions, where reviewer 7 gave the lowest percentage of confirmations for both LO and RO. In a separate model where expert was replaced by ‘qualification’, sonographers had significantly higher confirmed VR for both LO (β=0.74 95% CI: 0.38-1.10) and RO (β=0.86 95% CI: 0.40-1.32) compared to gynaecologists (Table 1). Radiologists also had higher confirmed VR than gynaecologists, but this was only significant at the 5% level for LO. The mean cVR-Both obtained using the model was 67.2%, ranging from 47.6% to 86.5% (95%CI: 63.9-70.5%, Table 2). Figure 3 and Figure 4 present marginal joint predictions (cVR-Both) for individual experts and sonographers respectively.\n\n*from a different model that replaces 'expert' with 'specialism'\n\n** from a different model that excludes 'expert'\n\nThe variance estimates for the LO and RO random effects were 0.76 and 1.23 respectively (Table 1), but these did not differ significantly (p=0.210). 
Indeed, despite the observed differences, there was no statistical difference in the LO versus RO effects concerning sonographer (p=0.115), reviewer (p=0.754) or the model as a whole (p=0.481). The correlation of the LO and RO random effects was 0.30 (95% CI: 0.04-0.53) and the error term correlation was 0.47 (95% CI: 0.24-0.65), implying a correlation of 0.39 (95% CI: 0.26-0.51) for the paired outcome of LO and RO for a given volunteer and occasion. This compares to the tetrachoric correlation of the raw data of 0.51, and to 0.37 when the fixed effects are included in a standard bivariate probit model. The resultant within-volunteer correlations (ICCs) for the repeated outcomes within a volunteer were 0.43 (95% CI: 0.29-0.57) and 0.55 (95% CI: 0.42-0.68) for LO and RO respectively. In addition, the ICCs for a model excluding the mean effect of the ‘expert’ term were lower, at 0.40 (95% CI: 0.26-0.53) for LO and 0.51 (95% CI: 0.38-0.64) for RO.\n\n\nDiscussion\n\nOur audit suggests that sonographers’ self-reported visualization rates of postmenopausal ovaries judged to have normal morphology are unreliable. Our study was facilitated by the unique TMS and URA systems employed in UKCTOCS, which permitted a retrospective review of the images and measurements recorded by the sonographer. It could be argued that the static images used for this audit represent a snapshot of a continuous pelvic examination so might not truly represent what was seen by the sonographer. Nevertheless, these static images were used to measure the ovaries, so the structure marked by the callipers was definitely considered to be an ovary by the sonographer.\n\nWe analysed the data using a statistical model that accounted for the correlated structure of the data, between left and right ovary scans, and between repeated assessments of the same scan by different experts. 
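The ICCs quoted above follow directly from the variance-component estimates on the latent probit scale, where the residual variance is fixed at 1 by convention. A minimal sketch (an illustrative helper, not the Stata code; under that convention it reproduces the reported values: 0.76 → 0.43 for LO and 1.23 → 0.55 for RO):

```python
def probit_icc(intercept_var, residual_var=1.0):
    """Latent-scale intraclass correlation for a probit random-effects
    model: the share of total latent variance attributable to the
    scan-level random intercept. The probit residual variance is
    fixed at 1 by convention."""
    return intercept_var / (intercept_var + residual_var)
```

For example, `probit_icc(0.76)` ≈ 0.43 and `probit_icc(1.23)` ≈ 0.55, matching the LO and RO ICCs reported in the Results.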
Normality was assumed for the underlying latent variable (‘propensity to confirm visualisation’) and for the distribution of the ovary-specific volunteer random effects. The model gave predictions on the probability scale that differed only slightly from the raw proportions, due to the nature of the study design. One clear benefit of using a statistical model with random effects is that all the data could be analysed together, producing variance component estimates that allow the calculation of ICCs. The value of the ICC was higher for the right ovary than the left, though not significantly different, and both were modest: 0.40 for LO and 0.51 for RO when excluding the expert term from the fixed effects, the only variable that varied over each scan’s repeated assessments. Hence the ICC is a measure of inter-rater (expert) agreement, and suggests that although there is moderate concordance, the experts cannot be relied upon to replicate each other’s judgements. However, such lack of agreement in respect of each individual scan does not change the overall conclusion of the audit in terms of the unreliability of the sonographers’ self-reported visualization rates.\n\nWe have previously reported on the Quality Control (QC) of UKCTOCS TVS scanning with similar exam selection criteria (ovaries were seen and normal)7. A single expert reviewed 1000 randomly chosen TVS examinations which had been performed by 96 sonographers. The expert’s cVR-Both was 50% compared to the 100% VR self-reported by the sonographers for these examinations. This result is broadly consistent with the results reported in this study for the group of seven sonographers, with a mean cVR-Both of 67.2%. 
The significant variation across sonographers in cVR-Both for normal postmenopausal ovaries is probably due to differences in sonographer ability and the subjective nature of this examination, a supposition supported by findings reported by Sharma et al.8.\n\n\nLimitations of the study\n\nIntra-observer reproducibility was not addressed, so the capability of individual experts to provide consistent results for the same exams was not measured. The study design was generally balanced, and potential confounders that might affect visualization should be expected to be evenly distributed across experts due to the randomization process. However, it is conceivable that these confounders may not be balanced across sonographers, due to potential geographical differences in their distribution. While this was not a major concern, such factors could have been seamlessly absorbed into the model to produce sonographer predictions conditional on an equal covariate distribution.\n\n\nConclusion\n\nThe results of this audit confirm that the visualization of normal postmenopausal ovaries by seven ‘high performing’ sonographers, as assessed by eight experts, could not be considered reliable, given that in almost a third of their examinations structures other than an ovary had been mistakenly measured for at least one of the ovaries. However, individual sonographer performance varied significantly, from 47% to 87% cVR-Both. These results show that it is possible for some sonographers to correctly visualize both ovaries when scanning a range of postmenopausal women, raising the possibility that other sonographers might achieve similar results if supported by a suitable quality improvement programme.\n\nThis audit highlights the problem of sonographers routinely mistaking other features, such as the bowel, for ovaries when scanning postmenopausal women. It also highlights the difficulties of providing effective Quality Control (QC) for such scans in a large-scale screening programme. 
Specifically, it shows that undertaking the type of expert review conducted in this study for a substantial number of sonographers on a regular basis would not be feasible without creating dedicated teams specializing in normal ovary identification from TVS images of postmenopausal women. Therefore, there is a need for further research to explore how independent and reliable QC metrics for TVS might be obtained by other means, for example by the automated analysis of TVS scan images, both static and video. Recent advances in machine learning research, particularly in the area of deep neural networks, suggest it might soon be viable to construct a system able to determine sonographer VR from a collection of images captured during a series of TVS examinations. Indeed, the use of such deep learning techniques in the gathering of quality metrics from obstetric ultrasound images is already showing some promise10.\n\nThe work done by the UKCTOCS group on the QC of TVS scanning seeks to improve understanding of the challenges associated with performing screening for ovarian cancer on a large scale and at multiple centres. All previous studies of ultrasound screening of postmenopausal ovaries for the early detection of cancer (excepting the recent QC study by our group) have accepted the self-reporting of ovarian visualisation rates as accurate. This is the first published audit of self-reporting of ovarian visualization rates, and the results cause us to question the reliability of this metric, particularly for QC purposes.\n\n\nEthics approval\n\nThe UKCTOCS study was approved by the North West Multicentre Research Ethics Committee on 21/6/2000; MREC reference 00/8/34. It is registered as an International Standard Randomised Controlled Trial (no. ISRCTN22488978).\n\n\nData availability\n\nDataset 1: DataKey.txt – description of data fields; UKCTOCS TVC audit data biprobit format-0.csv; UKCTOCS TVC audit data biprobit format-0.dta; UKCTOCS TVC audit data do file.do. 
DOI: 10.5256/f1000research.15663.d21304811\n\nStata v14.2 was used in conjunction with the files in Dataset 1 to obtain the results presented in this paper.",
"appendix": "Competing interests\n\n\n\nUM has stock ownership and research funding from Abcodia. She has received grants from Medical Research Council (MRC), Cancer Research UK (CR UK), the National Institute for Health Research (NIHR), and The Eve Appeal (TEA). IJJ reports personal fees from and stock ownership in Abcodia as the non-executive director and consultant. He reports personal fees from Women’s Health Specialists as the director. He has a patent for the Risk of Ovarian Cancer algorithm and an institutional licence to Abcodia with royalty agreement. He is a trustee (2012–14) and Emeritus Trustee (2015 to present) for The Eve Appeal. He has received grants from the MRC, CR UK, NIHR, and TEA. The remaining authors declare no competing interests.\n\n\nGrant information\n\nThe UKCTOCS trial was core funded by the Medical Research Council, Cancer Research UK, and the Department of Health with additional support from the Eve Appeal, Special Trustees of Bart’s and the London, and Special Trustees of UCLH. The researchers at UCL were supported by the National Institute for Health Research University College London Hospitals Biomedical Research Centre.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are very grateful to the many volunteers throughout the UK who participated in the trial and the entire medical, nursing, administrative staff and Sonographers who work on the UKCTOCS. In particular, the UKCTOCS Centre leads : Keith Godfrey, Northern Gynaecological Oncology Centre, Queen Elizabeth Hospital, Gateshead; David Oram, Department of Gynaecological Oncology, St. 
Bartholomew’s Hospital, London, Jonathan Herod, Department of Gynaecology, Liverpool Women’s Hospital, Liverpool, Karin Williamson, Department of Gynaecological Oncology, Nottingham City Hospital Nottingham; Howard Jenkins, Department of Gynaecological Oncology, Royal Derby Hospital, Derby; Tim Mould, Department of Gynaecology, Royal Free Hospital; Robert Woolas, Department of Gynaecological Oncology, St. Mary’s Hospital, Portsmouth; John Murdoch Department of Gynaecological Oncology, St. Michael’s Hospital, Bristol; Stephen Dobbs Department of Gynaecological Oncology, Belfast City Hospital, Belfast; Simon Leeson Department of Gynaecological Oncology, Llandudno Hospital, North Wales; Derek Cruickshank, Department of Gynaecological Oncology, James Cook University Hospital, Middlesbrough. We also acknowledge the work of the following in helping the authors GF, NA, and SC in performing the expert review of static TVS images; A. Ferguson, G. Turner, C. Brunell, K. Ford, R. Rangar.\n\n\nSupplementary material\n\nSupplementary File 1: Description of the probit random effects model. Specification of the probit random effects model and details of methods used for calculating correlations and predictions as referenced in the Statistical Modelling section of the Methods part of the paper.\n\nClick here to access the data.\n\n\nReferences\n\nSherman ME, Lacey JV, Buys SS, et al.: Ovarian volume: determinants and associations with cancer among postmenopausal women. Cancer Epidemiol Biomarkers Prev. 2006; 15(8): 1550–1554. PubMed Abstract | Publisher Full Text\n\nBodelon C, Pfeiffer RM, Buys SS, et al.: Analysis of serial ovarian volume measurements and incidence of ovarian cancer: implications for pathogenesis. J Natl Cancer Inst. 2014; 106(10): pii: dju262. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSharma A, Burnell M, Gentry-Maharaj A, et al.: Factors affecting visualization of postmenopausal ovaries: descriptive study from the multicenter United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). Ultrasound Obstet Gynecol. 2013; 42(4): 472–77. PubMed Abstract | Publisher Full Text\n\nHall DA, McCarthy KA, Kopans DB: Sonographic visualization of the normal postmenopausal ovary. J Ultrasound Med. 1986; 5(1): 9–11. PubMed Abstract | Publisher Full Text\n\nvan Nagell JR Jr, Miller RW, DeSimone CP, et al.: Long-term survival of women with epithelial ovarian cancer detected by ultrasonographic screening. Obstet Gynecol. 2011; 118(6): 1212–21. PubMed Abstract | Publisher Full Text\n\nMenon U, Gentry-Maharaj A, Hallett R, et al.: Sensitivity and specificity of multimodal and ultrasound screening for ovarian cancer, and stage distribution of detected cancers: results of the prevalence screen of the UK Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). Lancet Oncol. 2009; 10(4): 327–340. PubMed Abstract | Publisher Full Text\n\nStott W, Campbell S, Franchini A, et al.: Sonographers' self-reported visualization of normal postmenopausal ovaries on transvaginal ultrasound is not reliable: results of expert review of archived images from UKCTOCS. Ultrasound Obstet Gynecol. 2018; 51(3): 401–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSharma A, Burnell M, Gentry-Maharaj A, et al.: Quality assurance and its impact on ovarian visualization rates in the multicenter United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). Ultrasound Obstet Gynecol. 2016; 47(2): 228–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoodman D: Estimating fully observed recursive mixed-process models with cmp. Stata J. 2011; 11(2): 159–206. 
Reference Source\n\nYaqub M, Kelly B, Papageorghiou AT, et al.: A Deep Learning Solution for Automatic Fetal Neurosonographic Diagnostic Plane Verification Using Clinical Standard Constraints. Ultrasound Med Biol. 2017; 43(12): 2925–2933. PubMed Abstract | Publisher Full Text\n\nStott W, Gentry-Maharaj A, Ryan A, et al.: Dataset 1 in: Audit of transvaginal sonography of normal postmenopausal ovaries by sonographers from the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15663.d213048"
}
|
[
{
"id": "37084",
"date": "28 Aug 2018",
"name": "John R. van Nagell",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis paper reports the audit results of seven sonographers self-reporting high visualization rates of normal postmenopausal ovaries in the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). Eight experts reviewed static images from 357 ultrasound examinations performed on 349 postmenopausal women (mean age 60.0 yrs) to assess whether visualization of both ovaries with normal morphology could be confirmed. A random effects bivariate probit model was fitted to analyze the results. Both normal ovaries could be visualized in only two-thirds of cases, in both the sonographer and expert groups, and there was variation between findings reported by sonographers and their expert reviewers. As a result, the authors conclude that self-reported visualization of normal ovaries by sonographers is unreliable. The authors further suggest that effective quality control in the interpretation of large numbers of ovarian ultrasound images is difficult, and may be enhanced by applying recent advances in machine-learning to this problem.\n\nThis audit confirms that the ultrasound evaluation of normal postmenopausal ovaries is often challenging because of their low volume. In fact, ovarian volumes continue to decrease after menopause, so ultrasound visualization may be even more difficult in women in the seventh and eighth decades of life1. 
As demonstrated, interpretation of ovarian ultrasound images is subjective, and may vary significantly between sonographers and expert physicians. For this reason, ultrasound scans, both of normal and abnormal ovaries, are reviewed by physicians for all women enrolled in the University of Kentucky Ovarian Screening Trial2. This review is time-consuming, however, and may be impractical as ovarian cancer screening is made available to large populations of at-risk women. The authors are to be commended for supporting efforts to adapt machine-learning technology to achieve automated analysis of static and video ultrasound images.\n\nA related and equally important issue is the ability of sonographers and expert reviewers to identify those morphologic patterns associated with early ovarian cancer. The evaluation algorithm used to identify those ultrasound patterns associated with ovarian cancer is important because it increases the positive predictive value of screening, and limits operative intervention for benign disease. We agree with the authors that ovarian cancer screening should be offered to women at moderate risk for ovarian cancer as a method to reduce disease mortality through early detection3, and applaud their efforts to standardize ultrasound interpretation.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38131",
"date": "21 Sep 2018",
"name": "Mark E. Sherman",
"expertise": [
"Reviewer Expertise Epidemiology and molecular pathology of women’s cancers"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis manuscript assesses the retrospective findings of 8 expert reviewers in assessing ultrasounds reported by 7 ultrasonographers who recorded visualization of both normal postmenopausal ovaries in >89% of scans in the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). The study set included 357 examinations from 349 women and 1,871 randomly chosen scans reviewed by these ultrasonographers selected from a one-year period in the trial. The main aim was to assess whether the interpretation of bilateral visualization of normal ovaries is reliable, comparing original reports to retrospective expert review. Data were analyzed using a bivariate probit random effects model with outcomes of bilateral visualization versus not. Notably, normal postmenopausal ovaries may be quite small and the average age of women in UKCTOCS was about 60 years. The data show poor reliability across measures: bilateral visualization for original reviewers showed a mean of 67%, with a range of 47.6%-86.5%; expert results ranged from 47.3%-88.3%. Agreement among expert reviewers was also modest. The authors conclude that further research is needed to develop reliable quality control metrics for transvaginal ultrasound.\nIt is unclear how much expert reviewers knew about methods for assembling the study set and other details, which may have influenced interpretations. 
The study did not include a random sample of scans from the trial to serve as “distractors” or to provide a reference for comparison, and reliability data were not compared with external standards, such as measurements of ovaries that may have been removed later, CA 125 levels or rare cancer outcomes. External reviewers who were uninvolved with UKCTOCS may be of interest and possibly could be achieved via a web-based approach, at least for a subset.\nThis study in combination with prior reports from UKCTOCS (e.g. Stott et al and Sharma et al) provides a composite picture of the performance of ultrasound in the trial; however, the generalizability of the current study is difficult to assess, given the unusual method of scan selection and the engagement of reviewers who were intimately knowledgeable about the trial and perhaps aware of the design of this project. Irrespective of these concerns, the data from UKCTOCS suggest that ultrasound of normal ovaries among older women has limitations.\nGiven that reviewers were experienced and specifically trained for the task at hand, there are additional unknowns about whether and how performance could be improved, and how much of reviewers’ performances reflect inherent limitations of ultrasound for assessment of ovaries and ovarian cancer screening. Bodelon et al1 reported a high frequency of non-visualization of ovaries in the Prostate, Lung, Colorectal, and Ovarian (PLCO) screening trial, with a tendency for individual women to have repeated non-visualization. Further, although non-visualization is likely a marker of smaller ovarian size on average, it is notable that non-visualization conferred at best a marginally reduced risk of developing ovarian cancer in PLCO. 
Analysis of serial ovarian volumes in PLCO suggested that enlargement occurs rapidly within one to two years of cancer detection, and therefore, would be unlikely to have meaningful impact on clinical outcomes.\nIn a narrow sense, if ultrasound is to be used for ovarian cancer screening, then a better quality control metric than the frequency with which ovaries are visualized is needed. In a broader sense, this study and related literature call into question whether ultrasound imaging is useful in ovarian cancer screening, especially for high-grade serous carcinomas. To date, ovarian cancer screening with ultrasound and CA-125 has failed to achieve a reduction in ovarian cancer mortality. Although unproven, growing evidence points to the origin of many high-grade serous carcinomas, the most frequent lethal type of ovarian cancer, from the distal fallopian tube (fimbria), rather than from the ovarian surface epithelium. In contrast, other ovarian cancers (i.e. endometrioid and clear cell) may arise from endometriosis in the ovary and tend to remain organ confined for lengthier periods (present as stage I). Animal models of tubal cancer have shown that spread to the ovaries may accelerate disease progression (Perets et al2), but many questions remain about the pathogenesis of serous cancers among women, including the sojourn time of disease development and the role of the ovary in promoting metastatic spread. These fundamental questions raise larger issues about the role of assessing the ovary as part of cancer screening and the potential of ultrasound to identify cancers at an early, curable stage. Gaps in knowledge of the pathogenesis of ovarian cancer, especially high-grade serous cancers, may pose a barrier to improved early detection. These larger issues could be addressed to place the results of the study in context.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "4027",
"date": "01 Oct 2018",
"name": "Will Stott",
"role": "Author Response",
"response": "We thank Prof Sherman for his review, but seek clarification so that we might improve our paper. We note that he does not consider that our conclusions are adequately supported by our results. Does he believe our results show that reports from an individual sonographer about the ovary visualisation of her own scans produce more reliable quality control metrics than the combined judgement of a team of eight experts when reviewing the images the sonographer used to measure ovaries, taken from a random sample of her scans over a year? If so, we should be grateful if he would identify the data in our results that support such a conclusion."
}
]
},
{
"id": "38129",
"date": "03 Oct 2018",
"name": "Christine D. Berg",
"expertise": [
"Reviewer Expertise My area of expertise is clinical cancer screening trials. I was the US NCI lead for the Prostate",
"Lung",
"Colorectal and Ovarian Cancer Screening Trial and the National Lung Screening Trial."
],
"suggestion": "Approved",
"report": "Approved\n\nThe United Kingdom Collaborative Trial of Ovarian Cancer Screening was a high-quality randomized clinical trial that evaluated trans-vaginal ultrasound screening (TVUS) and multi-modal screening (MMS), in which Risk of Ovarian Cancer Algorithm (ROCA) testing based on CA-125 measurements was followed by ultrasound when indicated, against no screening in 202,546 women at average risk of ovarian cancer. After 7 to 11 rounds of screening and up to 14 years of follow-up, results trended toward benefit with ultrasound (HR = 0.91 [95% CI, 0.76 to 1.09]) and MMS (HR = 0.89 [95% CI, 0.74 to 1.08])1. A comprehensive quality assurance program for TVUS was undertaken2. The current paper is the result of an audit, ordered by the trial’s Ultrasound Management Subcommittee, of seven sonographers reporting rates of visualizing both ovaries of > 89% after an accreditation program done by UKCTOCS in 2008. Eight experts reviewed 357 archived, centrally stored static images that also had measurement markers recording ovary dimensions. The mean visualization rate for both ovaries upon review fell to 67.2%, with a range of 47.6% to 86.5%. The range between the experts was 47.3% to 88.3%. The trialists are to be commended for the design and conduct of this impressive ovarian cancer screening trial, which shows some evidence of stage-shift and mortality reduction, but only after many rounds and long follow-up in a large group of women. 
The quality assurance plans in place to train and monitor TVUS results are impressive. While the technology dates to 2008-2009 and highly selected experts might do better, the results of this audit strike a cautionary note about the rate of probably unavoidable non-visualization of ovaries with TVUS in post-menopausal women. The authors call for expanded efforts in automated analysis with machine learning. Improved technologies or biomarkers are also needed to realize the promise of lowering ovarian cancer mortality with early detection.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1241
|
https://f1000research.com/articles/7-993/v1
|
03 Jul 18
|
{
"type": "Research Article",
"title": "Analysis of a large food chemical database: chemical space, diversity, and complexity",
"authors": [
"J. Jesús Naveja",
"Mariel P. Rico-Hidalgo",
"José L. Medina-Franco",
"J. Jesús Naveja",
"Mariel P. Rico-Hidalgo"
],
"abstract": "Background: Food chemicals are a cornerstone of the food industry. However, their chemical diversity has been explored only on a limited basis; for instance, previous analyses of food-related databases covered at most 2,200 molecules. The goal of this work was to quantify the chemical diversity of the chemical compounds stored in FooDB, a database with nearly 24,000 food chemicals. Methods: The visual representation of the chemical space of FooDB was done with ChemMaps, a novel approach based on the concept of chemical satellites. The large food chemical database was profiled based on physicochemical properties, molecular complexity and scaffold content. The global diversity of FooDB was characterized using Consensus Diversity Plots. Results: It was found that compounds in FooDB are very diverse in terms of properties and structure, with a large structural complexity. It was also found that one third of the food chemicals are acyclic molecules and that ring-containing molecules are mostly monocyclic, with several scaffolds common to natural products in other databases. Conclusions: To the best of our knowledge, this is the first analysis of the chemical diversity and complexity of FooDB. This study represents a step further in the emerging field of “Food Informatics”. Future studies should directly compare the chemical structures of the molecules in FooDB with other compound databases, for instance, drug-like databases and natural products collections.",
"keywords": [
"ChemMaps",
"chemical space",
"chemoinformatics",
"consensus diversity plots",
"diversity",
"FooDB",
"Foodinformatics",
"in silico"
],
"content": "Introduction\n\nDespite the high relevance of food chemicals in many areas including nutrition, disease prevention, and broad impact in the food industry, the chemical space and diversity of food chemical databases (Minkiewicz et al., 2016) have been quantified on a limited basis. Previous efforts include the analysis and comparison of about 2,200 generally regarded as safe (GRAS) flavoring substances (discrete chemical entities only) with compound databases relevant in drug discovery and natural product research, e.g., drugs approved for clinical use, compounds in the ZINC database, and natural products from different sources (Burdock & Carabin, 2004; González-Medina et al., 2016; González-Medina et al., 2017; Martinez-Mayorga et al., 2013; Medina-Franco et al., 2012; Peña-Castillo et al., 2018). Other food-related chemical databases, comprising around 900 compounds, were analyzed by Ruddigkeit and Reymond (Ruddigkeit & Reymond, 2014). The limited quantitative analysis of food chemicals has been in part due to the scarce availability of food chemical databases in the public domain. A major exception, however, is FooDB, a large database with more than 20,000 food chemicals (The Metabolomics Innovation Centre, 2017). To date, it is the most informative public repository of food compounds.\n\nAs part of a continued effort to characterize the chemical contents and diversity of food chemicals (González-Medina et al., 2016; Martinez-Mayorga & Medina-Franco, 2009; Medina-Franco et al., 2012), herein we report a quantitative analysis of the chemical space and chemical diversity of FooDB. Widely characterized compound databases such as GRAS, approved drugs and screening compounds used in drug discovery projects were employed as references. We used well-established and novel (but validated) chemoinformatic methods to analyze compound collections. 
Although most of these approaches are commonly used in drug discovery, this and previous works show they can be readily applied to food chemicals (Peña-Castillo et al., 2018). Thus, this study represents a contribution to further advance the emerging field of Foodinformatics (Martinez-Mayorga & Medina-Franco, 2014).\n\n\nMethods\n\nFour chemical databases were homogeneously curated and analyzed, namely: FooDB version 1.0 (accessed November, 2017) (The Metabolomics Innovation Centre, 2017), drugs approved for clinical use available in DrugBank 5.0.2. (Law et al., 2014), GRAS (Burdock & Carabin, 2004), and a random subset of drug-like natural products from ZINC 12 (Irwin & Shoichet, 2005), of a size comparable to FooDB. Compounds from all databases were washed and prepared using the Wash MOE 2017 node in KNIME version 3.5.3 (Berthold et al., 2008). Briefly, the washing protocol implemented in MOE included removing salts and neutralizing the charges in the molecules. The largest fragments were kept and duplicates in each dataset were deleted. Table 1 summarizes the databases and their sizes after data preprocessing.\n\na Number of compounds after data curation\n\nGRAS: generally regarded as safe\n\nThe visual representation was generated with ChemMaps, a novel method for large chemical space visualizations (Naveja & Medina-Franco, 2017). Briefly, ChemMaps generates two- and three-dimensional representations of the chemical space. It uses as input the pairwise chemical similarities computed from fingerprint data. This approach exploits the 'chemical satellites' concept (Oprea & Gottfries, 2001), i.e., molecules whose similarity to the rest of the molecules in the database yields sufficient information for generating a visualization of the chemical space. 
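The satellite concept can be illustrated with a small, self-contained sketch. This is not the actual ChemMaps implementation; the set-based fingerprints and satellite choices below are invented for illustration. The idea is simply that each molecule can be positioned using only its Tanimoto similarities to a few fixed reference ("satellite") molecules, avoiding a full pairwise similarity matrix.

```python
# Toy sketch of the "chemical satellites" idea: coordinates for each molecule
# come from its Tanimoto similarity to a small set of reference (satellite)
# molecules. Fingerprints here are illustrative sets of on-bits, not real
# MACCS keys; names and data are hypothetical.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 1.0
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def satellite_coordinates(molecules, satellites):
    """Map each molecule to a tuple of similarities to the satellites.
    Two satellites yield 2-D coordinates; three yield 3-D coordinates."""
    return {name: tuple(tanimoto(fp, sat) for sat in satellites)
            for name, fp in molecules.items()}

if __name__ == "__main__":
    mols = {
        "benzene-like": {1, 2, 3, 4},
        "pyran-like":   {3, 4, 5, 6},
        "acyclic-like": {7, 8, 9},
    }
    sats = [{1, 2, 3}, {5, 6, 7}]   # two toy satellite fingerprints
    for name, xy in satellite_coordinates(mols, sats).items():
        print(name, xy)
```

The actual coordinate-generation procedure used by ChemMaps is described in Naveja & Medina-Franco (2017); this sketch only conveys why a handful of reference molecules can anchor a low-dimensional map.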
Further details of ChemMaps are described elsewhere (Naveja & Medina-Franco, 2017).\n\nSix physicochemical properties (PCP) were calculated with RDKit KNIME nodes version 3.4, namely: SlogP (partition coefficient), TPSA (topological polar surface area), AMW (atomic mass weight), RB (rotatable bonds), HBD (hydrogen bond donors) and HBA (hydrogen bond acceptors). For the analysis reported in this short communication, these properties were selected based on their widespread use for cross-comparisons of compound databases of biological relevance. However, additional properties can be calculated.\n\nThe fraction of sp3 carbons and the number of stereocenters were computed for FooDB as measures of structural complexity. Although there are several other measures, these two are straightforward to interpret, easy to calculate, and becoming standard for cross-comparisons among databases (Méndez-Lucio & Medina-Franco, 2017). As described in the Results and Discussion section, the computed values for FooDB were compared to literature data already reported for the reference data sets.\n\nThe term “molecular scaffold” is employed to describe the core structure of a molecule (Brown & Jacoby, 2006). Different approaches have been proposed to consistently obtain a molecule’s scaffold in silico. In this work, scaffolds were generated under the Bemis-Murcko definition using the RDKit nodes available in KNIME (Bemis & Murcko, 1996). Bemis and Murcko define a scaffold as “the union of ring systems and linkers in a molecule”, i.e., all side chains of a molecule are removed.\n\nThe so-called “global diversity” (or total diversity) of FooDB was assessed and compared to other reference collections using a consensus diversity plot (González-Medina et al., 2016). 
As described recently, a consensus diversity plot simultaneously represents, in two dimensions, four diversity criteria: structural (based on pairwise molecular fingerprint similarity values), scaffolds (using Murcko scaffolds computed as described in the Scaffold content section), physicochemical properties (based on the six properties described in the Physicochemical properties section), and database size (the number of compounds) (González-Medina et al., 2016). The structural diversity of each data set is represented on the X-axis and was defined as the median Tanimoto coefficient of MACCS keys fingerprints. The scaffold diversity of each database is represented on the Y-axis and was defined as the area under the corresponding scaffold recovery curve, a well-established metric to measure scaffold diversity (Medina-Franco et al., 2009). The diversity based on PCP was defined as the Euclidean distance of six auto-scaled properties (SlogP, TPSA, AMW, RB, HBD, and HBA - vide supra) and is shown as the filling of the data points using a continuous color scale. The relative number of compounds in the data set is represented with a different size of the data points (smaller data sets are represented with smaller data points).\n\n\nResults and discussion\n\nThe chemical space of FooDB in comparison with the compounds of the three reference databases is visualized in Figure 1. The figure also shows the individual comparisons of FooDB with GRAS, DrugBank and the natural products subset from ZINC, respectively. As shown in Figure 1a, the chemical space coverage of FooDB is quite large compared to the other datasets. Most GRAS compounds lie within the chemical space framed by FooDB (Figure 1b): indeed, 1,193 compounds (53% of GRAS) are structurally identical between the two databases. Hence, FooDB largely contains and upgrades structural information from GRAS. 
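The three diversity measures behind the Consensus Diversity Plots (median pairwise Tanimoto similarity, area under the scaffold recovery curve, and Euclidean distance of auto-scaled properties) can be sketched in plain Python. This is a minimal sketch with toy inputs, not the study's KNIME workflow: the fingerprints stand in for MACCS keys, and the scaffold counts and property vectors are illustrative.

```python
# Sketch of the three Consensus Diversity Plot criteria on toy data:
#   structural -> median pairwise Tanimoto similarity of fingerprints
#   scaffold   -> area under the scaffold recovery curve (rectangle rule)
#   properties -> Euclidean distance between auto-scaled property vectors
import statistics
from itertools import combinations

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient for fingerprints given as sets of on-bits."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared) if (fp_a or fp_b) else 1.0

def median_pairwise_tanimoto(fingerprints):
    """Lower median similarity = higher structural diversity (X-axis)."""
    return statistics.median(tanimoto(a, b)
                             for a, b in combinations(fingerprints, 2))

def scaffold_recovery_auc(scaffold_counts):
    """Cumulative fraction of compounds recovered as scaffolds are added from
    most to least frequent; lower area = higher scaffold diversity (Y-axis)."""
    counts = sorted(scaffold_counts, reverse=True)
    total = sum(counts)
    step = 1.0 / len(counts)        # fraction of scaffolds added per step
    recovered, area = 0, 0.0
    for c in counts:
        recovered += c
        area += (recovered / total) * step
    return area

def autoscaled_euclidean(props_a, props_b, means, stds):
    """Euclidean distance between two property vectors after auto-scaling
    (z-scoring) each property with the given means and standard deviations."""
    za = [(x - m) / s for x, m, s in zip(props_a, means, stds)]
    zb = [(x - m) / s for x, m, s in zip(props_b, means, stds)]
    return sum((p - q) ** 2 for p, q in zip(za, zb)) ** 0.5
```

A database in which every compound has a unique scaffold gives a low recovery-curve area, while one dominated by a single scaffold gives an area near 1, which is why lower Y-axis values in the plot indicate higher scaffold diversity.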
There is significant overlap of FooDB with approved drugs (Figure 1c) and with natural products from ZINC (Figure 1d).\n\nThe visual representation was generated with ChemMaps (Naveja & Medina-Franco, 2017). a) Comparison of FooDB with three reference collections. Panels b–d) show comparisons of FooDB with individual data sets.\n\nFigure 2 shows the boxplots for the distributions of PCP in all four databases. For better visualization, the outliers above or below the median +/- 1.5 interquartile range are omitted. As expected, due to the large structural diversity, the distribution of PCP in FooDB is broad, in many cases exceeding even that of approved drugs. For most properties, except RB, several compounds in FooDB share the properties of drugs and drug-like natural products in ZINC. In turn, GRAS consists mostly of small-sized compounds. Table S1 (Supplementary File 1) summarizes the statistics for FooDB and the other reference collections.\n\nBox plots of the distribution of six physicochemical properties of FooDB and reference data sets. SlogP (partition coefficient), TPSA (topological polar surface area), AMW (atomic mass weight), RB (rotatable bonds), HBD (hydrogen bond donors) and HBA (hydrogen bond acceptors).\n\nFor FooDB, the fraction of sp3 carbons (mean: 0.62; standard deviation: 0.28) and the number of stereocenters (mean: 4.7; standard deviation: 7.1) indicated a high structural complexity. For comparison, it has been reported that the mean fraction of sp3 carbons for approved drugs, compounds in the clinic, and general screening collections of organic compounds is 0.47, 0.41 and 0.32, respectively (González-Medina et al., 2016; Lovering et al., 2009). Moreover, the reported mean fraction of sp3 carbons for natural products collections ranges between 0.41 and 0.58 (for natural products in ZINC and Traditional Chinese Medicine, respectively) (López-Vallejo et al., 2012). 
The complexity of compounds in FooDB is comparable to that of molecules in GRAS (mean: 0.63; standard deviation: 0.28) (González-Medina et al., 2016).\n\nFigure 3 shows the frequency of the most common scaffolds in FooDB. Many compounds are acyclic (32%), followed by monocyclic compounds with a benzene (6%), cyclohexene (2%) and tetrahydropyran (1%) as a core structure. The benzene ring is the most common core scaffold in chemical databases used in drug discovery (Bemis & Murcko, 1996; Singh et al., 2009; Yongye et al., 2012). Many of the most frequent scaffolds in FooDB are also common in other compound databases of natural products (González-Medina et al., 2017).\n\nRecently, Schneider et al. published an analysis on the selectivity of Bemis-Murcko scaffolds based on public bioactivity data available in ChEMBL (Schneider & Schneider, 2017). Of the 585 scaffolds reported therein, 78 were present in FooDB. The list of the 78 matching scaffolds, along with the original statistics calculated by Schneider et al., is made available as Dataset 1 (Naveja et al., 2018a). Of note, the three most frequent scaffolds in FooDB (benzene, cyclohexane and tetrahydropyran, with more than 300 compounds - Figure 3) are matching scaffolds. Interestingly, the mean Information content (I) value of all 585 of Schneider’s scaffolds is 2.8 (sd = 0.6), while the subset of the 78 scaffolds also present in FooDB has a mean I value of only 2.1 (sd = 0.7). Lower I values point towards more promiscuous scaffolds (Schneider & Schneider, 2017), an expected finding given the nature of the database. As an example, Table S2 (Supplementary File 1) briefly presents and discusses the statistics for the three most frequent matching scaffolds.\n\nPolyphenols. Since polyphenols are an important class of compounds in food chemistry (Rasouli et al., 2017), we investigated and quantified the amount of polyphenols in FooDB. 
Polyphenols are well-known antioxidants, which may play a role in the prevention of several diseases including type 2 diabetes, cardiovascular diseases, and some types of cancer (Neveu et al., 2010). Along these lines, it is known that oxidative/nitrosative stress has a pivotal role in the pathophysiology of neurodegenerative disorders and other kinds of disease (Ebrahimi & Schluesener, 2012). Polyphenols have been demonstrated to elicit several biological effects in in vitro and ex vivo tests (Del Rio et al., 2010; Scalbert et al., 2005).\n\nThe molecular structure of polyphenols includes at least two phenolic groups, or one biphenol, with any number of additional OH substitutions on the aryl rings. They may be classified by their structure into two large groups: flavonoids and non-flavonoids (phenolic acid derivatives) (Del Rio et al., 2013). Some polyphenols, such as quercetin, are found in all plant products, whereas others are specific to particular foods. In many cases, foods contain complex mixtures of polyphenols, which are often poorly characterized (Manach et al., 2004).\n\nPolyphenols are also a common chemical motif among natural products, and they are often associated with promiscuity (Tang, 2016). In this work it was found that 3,228 (13.5%) compounds in FooDB are polyphenolic. The list of all 3,228 polyphenolic compounds is made available as Dataset 2 (Naveja et al., 2018b). This set of polyphenols is larger than the 502 polyphenols from food indexed in Phenol-Explorer (Neveu et al., 2010). For comparison, all the reference databases used in this work contained fewer polyphenols than FooDB. GRAS, ZINC and DrugBank contained 15 (0.6%), 24 (0.1%) and 325 (3.7%) polyphenols, respectively.\n\nSince the diversity of compound data sets depends on the molecular representation (Sheridan & Kearsley, 2002), the global diversity of FooDB was assessed using different criteria: molecular fingerprints, scaffolds, physicochemical properties and number of compounds. 
The four criteria were analyzed in an integrated manner through a Consensus Diversity Plot generated as described in the Global diversity section of the Methods. The Consensus Diversity Plot in Figure 4 shows that FooDB has about average diversity by fingerprints and relatively low diversity by scaffolds. Although PCP (represented with the color of the data points) are extremely diverse, structural motifs seem to reappear with slight variations. Figure 4 also shows the overall large fingerprint and scaffold diversity of approved drugs (e.g., data points towards the lower left region of the plot). Similarly, the relative global diversity of GRAS, i.e., high fingerprint diversity but low scaffold diversity (e.g., upper left region of the plot), is consistent with previous comparisons of these compounds with other reference data sets (González-Medina et al., 2016; Medina-Franco et al., 2012).\n\nThe structural diversity of each data set is represented on the X-axis and was defined as the median Tanimoto coefficient of MACCS keys fingerprints. The scaffold diversity of each database is represented on the Y-axis and was defined as the area under the corresponding scaffold recovery curve. The diversity based on physicochemical properties (PCP) was defined as the Euclidean distance of six auto-scaled properties (SlogP, TPSA, AMW, RB, HBD, and HBA) and is shown as the filling of the data points using a continuous color scale. The relative number of compounds is represented with a different size of the data points (smaller data sets are represented with smaller data points).\n\n\nConclusions\n\nFooDB is a novel, large and diverse library containing information on more than 23,000 compounds found in food. To date, it is the most informative public resource of food compounds. Visual representation of the chemical space revealed that FooDB largely contains and upgrades structural information from GRAS. Indeed, most of GRAS is contained in FooDB. 
Compounds in FooDB have a large diversity of physicochemical properties. The distributions of most physicochemical properties of FooDB compounds overlap with those of approved drugs and natural products in ZINC. GRAS mostly contains small-sized compounds. The global diversity analysis indicates that FooDB has a large structural diversity as measured by molecular fingerprints, though it has relatively low scaffold diversity. One third of the compounds in FooDB are acyclic. The most frequent cyclic scaffolds are monocyclic. Of note, polyphenols represent a large fraction of FooDB. Analysis of the chemical complexity revealed that compounds in FooDB are more complex than approved drugs and natural products and have complexity comparable to GRAS compounds. A next step for this work is to compare the chemical space of FooDB with that of natural products from different sources, e.g., plants, terrestrial, cyanobacteria. A second suggested future study is to perform the virtual screening of FooDB across a range of targets, for instance, the increasingly important epigenetic targets (Naveja & Medina-Franco, 2018). The goal of such a study would be to systematically identify dietary components that may participate in epigenetic regulatory processes (Martinez-Mayorga et al., 2013). These efforts are ongoing in our group and will be reported in due course.\n\n\nData availability\n\nDataset 1: (Schneidermatch.sdf). This file contains the list of the 78 matching scaffolds in SDF format, along with the original statistics calculated by Schneider et al. No special software is required to open the SDF files. Any commercial or free software capable of reading SDF files will open the data sets supplied. 10.5256/f1000research.15440.d209071 (Naveja et al., 2018a)\n\nDataset 2: (FooDBpolyphenols.sdf). This file contains 3,228 polyphenolic compounds available in FooDB, in SDF format. No special software is required to open the SDF files. 
Any commercial or free software capable of reading SDF files will open the data sets supplied. 10.5256/f1000research.15440.d209072 (Naveja et al., 2018b)",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by a Consejo Nacional de Tecnología (CONACyT) scholarship [622969] (JJN). Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica (PAPIIT) Grant [IA203018] from the Universidad Nacional Autónoma de México (JLMF).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank Karina Martínez-Mayorga, Andrea Peña-Castillo and Nicole Trujillo for rich discussions and valuable insights.\n\n\nSupplementary material\n\nSupplementary File 1: File with supporting tables. Table S1: Summary statistics of the distribution of six PCP of FooDB and other reference collections. Table S2: Selected scaffold statistics as reported by (Schneider & Schneider, 2017).\n\nClick here to access the data.\n\n\nReferences\n\nBemis GW, Murcko MA: The properties of known drugs. 1. Molecular frameworks. J Med Chem. 1996; 39(15): 2887–93. PubMed Abstract | Publisher Full Text\n\nBerthold MR, Cebron N, Dill F, et al.: KNIME: The Konstanz Information Miner. In: Preisach C, Burkhardt H, Schmidt-Thieme L, Decker R, (Eds.), Data Analysis, Machine Learning and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg. 2008; 319–326. Publisher Full Text\n\nBrown N, Jacoby E: On scaffolds and hopping in medicinal chemistry. Mini Rev Med Chem. 2006; 6(11): 1217–29. PubMed Abstract | Publisher Full Text\n\nBurdock GA, Carabin IG: Generally recognized as safe (GRAS): history and description. Toxicol Lett. 2004; 150(1): 3–18. PubMed Abstract | Publisher Full Text\n\nDel Rio D, Costa LG, Lean ME, et al.: Polyphenols and health: what compounds are involved? Nutr Metab Cardiovasc Dis. 2010; 20(1): 1–6. 
PubMed Abstract | Publisher Full Text\n\nDel Rio D, Rodriguez-Mateos A, Spencer JP, et al.: Dietary (poly)phenolics in human health: structures, bioavailability, and evidence of protective effects against chronic diseases. Antioxid Redox Signal. 2013; 18(14): 1818–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEbrahimi A, Schluesener H: Natural polyphenols against neurodegenerative disorders: potentials and pitfalls. Ageing Res Rev. 2012; 11(2): 329–45. PubMed Abstract | Publisher Full Text\n\nGonzález-Medina M, Owen JR, El-Elimat T, et al.: Scaffold Diversity of Fungal Metabolites. Front Pharmacol. 2017; 8: 180. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzález-Medina M, Prieto-Martínez FD, Naveja JJ, et al.: Chemoinformatic expedition of the chemical space of fungal products. Future Med Chem. 2016; 8(12): 1399–412. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzález-Medina M, Prieto-Martínez FD, Owen JR, et al.: Consensus Diversity Plots: a global diversity analysis of chemical libraries. J Cheminform. 2016; 8: 63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIrwin JJ, Shoichet BK: ZINC--a free database of commercially available compounds for virtual screening. J Chem Inf Model. 2005; 45(1): 177–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaw V, Knox C, Djoumbou Y, et al.: DrugBank 4.0: shedding new light on drug metabolism. Nucleic Acids Res. 2014; 42(Database issue): D1091–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLópez-Vallejo F, Giulianotti MA, Houghten RA, et al.: Expanding the medicinally relevant chemical space with compound libraries. Drug Discov Today. 2012; 17(13–14): 718–26. PubMed Abstract | Publisher Full Text\n\nLovering F, Bikker J, Humblet C: Escape from flatland: increasing saturation as an approach to improving clinical success. J Med Chem. 2009; 52(21): 6752–6. 
PubMed Abstract | Publisher Full Text\n\nManach C, Scalbert A, Morand C, et al.: Polyphenols: food sources and bioavailability. Am J Clin Nutr. 2004; 79(5): 727–47. PubMed Abstract | Publisher Full Text\n\nMartinez-Mayorga K, Medina-Franco JL: Chemoinformatics-applications in food chemistry. Adv Food Nutr Res. 2009; 58: 33–56. PubMed Abstract | Publisher Full Text\n\nMartinez-Mayorga K, Medina-Franco JL: Foodinformatics: Applications of chemical information to food chemistry. Springer. 2014; Publisher Full Text\n\nMartinez-Mayorga K, Peppard TL, López-Vallejo F, et al.: Systematic mining of generally recognized as safe (GRAS) flavor chemicals for bioactive compounds. J Agric Food Chem. 2013; 61(31): 7507–14. PubMed Abstract | Publisher Full Text\n\nMedina-Franco JL, Martínez-Mayorga K, Bender A, et al.: Scaffold diversity analysis of compound data sets using an entropy-based measure. QSAR Comb Sci. 2009; 28(11–12): 1551–1560. Publisher Full Text\n\nMedina-Franco JL, Martínez-Mayorga K, Peppard TL, et al.: Chemoinformatic analysis of GRAS (Generally Recognized as Safe) flavor chemicals and natural products. PLoS One. 2012; 7(11): e50798. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMéndez-Lucio O, Medina-Franco JL: The many roles of molecular complexity in drug discovery. Drug Discov Today. 2017; 22(1): 120–126. PubMed Abstract | Publisher Full Text\n\nMinkiewicz P, Darewicz M, Iwaniak A, et al.: Internet databases of the properties, enzymatic reactions, and metabolism of small molecules-search options and applications in food science. Int J Mol Sci. 2016; 17(12): pii: E2039. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNaveja JJ, Medina-Franco JL: ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds [version 2; referees: 3 approved with reservations]. F1000Res. 2017; 6: pii: Chem Inf Sci-1134. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNaveja JJ, Medina-Franco JL: Insights from pharmacological similarity of epigenetic targets in epipolypharmacology. Drug Discov Today. 2018; 23(1): 141–150. PubMed Abstract | Publisher Full Text\n\nNaveja JJ, Rico-Hidalgo MP, Medina-Franco JL: Dataset 1 in: Analysis of a large food chemical database: chemical space, diversity, and complexity. F1000Research. 2018a. Data Source\n\nNaveja JJ, Rico-Hidalgo MP, Medina-Franco JL: Dataset 2 in : Analysis of a large food chemical database: chemical space, diversity, and complexity. F1000Research. 2018b. Data Source\n\nNeveu V, Perez-Jiménez J, Vos F, et al.: Phenol-Explorer: an online comprehensive database on polyphenol contents in foods. Database (Oxford). 2010; 2010: bap024. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOprea TI, Gottfries J: Chemography: the art of navigating in chemical space. J Comb Chem. 2001; 3(2): 157–166. PubMed Abstract | Publisher Full Text\n\nPeña-Castillo A, Méndez-Lucio O, Owen JR, et al.: Chemoinformatics in Food Science. In J. Gasteiger & T. Engel (Eds.), Chemoinformatics - Volume 2: From Methods to Applications. Weinheim, Germany: Wiley-VCH. 2018. Publisher Full Text\n\nRasouli H, Farzei MH, Khodarahmi R: Polyphenols and their benefits: A review. Int J Food Prop. 2017; 20(sup2): 1700–1741. Publisher Full Text\n\nRuddigkeit L, Reymond JL: The chemical space of flavours. In K. Martinez-Mayorga & J. L. Medina-Franco (Eds.), Foodinformatics. Cham: Springer International Publishing. 2014; 83–96. Publisher Full Text\n\nScalbert A, Johnson IT, Saltmarsh M: Polyphenols: antioxidants and beyond. Am J Clin Nutr. 2005; 81(1 Suppl): 215S–217S. PubMed Abstract | Publisher Full Text\n\nSchneider P, Schneider G: Privileged Structures Revisited. Angew Chem Int Ed Engl. 2017; 56(27): 7971–7974. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSheridan RP, Kearsley SK: Why do we need so many chemical similarity search methods? 
Drug Discov Today. 2002; 7(17): 903–911. PubMed Abstract | Publisher Full Text\n\nSingh N, Guha R, Giulianotti MA, et al.: Chemoinformatic analysis of combinatorial libraries, drugs, natural products, and molecular libraries small molecule repository. J Chem Inf Model. 2009; 49(4): 1010–1024. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTang GY: Why Polyphenols have Promiscuous Actions? An Investigation by Chemical Bioinformatics. Nat Prod Commun. 2016; 11(5): 655–656. PubMed Abstract\n\nThe Metabolomics Innovation Centre: FooDB (Version 1). Computer software, Canada: The Metabolomics Innovation Centre. 2017. Reference Source\n\nYongye AB, Waddell J, Medina-Franco JL: Molecular scaffold analysis of natural products databases in the public domain. Chem Biol Drug Des. 2012; 80(5): 717–724. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "35684",
"date": "11 Jul 2018",
"name": "Piotr Minkiewicz",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI have no critical remarks concerning methods, correctness of work. Discussion is also appropriate from the point of view of scientists working in the areas of cheminformatics and/or pharmacology. I would like to ask some questions concerning relevance of the article for food science. The analysis performed reveals similarity in structural and physico-chemical features between compounds from FooDB and DrugBank. Does it mean that more detailed studies may reveal similar biological activity (i.e. interactions with the same target) of drugs and bioactive food components. Are Authors’ results consistent with these published in the following articles concerning similarity of effects of drugs and food components? Jensen K. et al. PLoS Comput Biol, 10, (2014)1 Jensen K. et al. PLoS Comput Biol, 11, (2015)2 Proteins interacting with polyphenols and described in the following article: Lacroix S. et al. Sci Rep, 8, (2018)3 are also annotated in DrugBank as drug targets. Is the above finding consistent with the Authors’ conclusions? I would like to ask Authors to add few sentences concerning limitations of the proposed methodology (for instance limitations occurring due to presence of activity cliffs).\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3828",
"date": "16 Jul 2018",
"name": "José L. Medina-Franco",
"role": "Author Response",
"response": "Thank the reviewer for critically reading our manuscript and the valuable feedback. Hereunder we provide a point-by-point response to each comment. Comment: \"I have no critical remarks concerning methods, correctness of work. Discussion is also appropriate from the point of view of scientists working in the areas of cheminformatics and/or pharmacology. I would like to ask some questions concerning relevance of the article for food science. The analysis performed reveals similarity in structural and physico-chemical features between compounds from FooDB and DrugBank. Does it mean that more detailed studies may reveal similar biological activity (i.e. interactions with the same target) of drugs and bioactive food components.\" Response: We agree with the valuable input. Indeed, as the reviewers points out, similar physico-chemical properties between compounds from FoodDB and DrugBank encourages additional systematics investigations for bioactivity of food components. In the revised version of the manuscript, that is under editing and will be uploaded in due course, we will expand the discussion of the manuscript elaborating more on the significance of the work. Comment: \"Are Authors’ results consistent with these published in the following articles concerning similarity of effects of drugs and food components? Jensen K. et al. PLoS Comput Biol, 10, (2014)1 Jensen K. et al. PLoS Comput Biol, 11, (2015)2' Response: We are grateful to the reviewer for pointing out the two papers of Jensen K. et al. As stated in the manuscript, the goal of this study was to characterize the chemical content, diversity and complexity of the chemical structures of a large and public database of food chemicals. The studies of Jensen et al. are focused on finding food-disease associations and food-drug interactions. 
Following the reviewer’s advice, we addressed this comment in the revised manuscript, stating that, as a perspective of our current work, FooDB can be used to further augment the current knowledge of food-disease associations and food-drug interactions. The two suggested references are being added to the revised manuscript. Comment: \"Proteins interacting with polyphenols and described in the following article: Lacroix S. et al. Sci Rep, 8, (2018)3 are also annotated in DrugBank as drug targets. Is the above finding consistent with the Authors’ conclusions?\" Response: We thank the reviewer for bringing to our attention the work of Lacroix S. et al. Our results are consistent with this study. In particular, the number of polyphenol compounds found in FooDB is larger than the number of compounds found in the Phenol-Explorer database. This point is being addressed in section “3.4.1. Polyphenols” of the revised manuscript. In the revised manuscript we added the suggested reference. In addition, in the Conclusions section, we are also stating that the set of polyphenols from FooDB identified in this work can further enrich the ongoing efforts of polyphenol-protein interactome studies such as the one published by Lacroix S. et al. Comment: \"I would like to ask the Authors to add a few sentences concerning limitations of the proposed methodology (for instance limitations occurring due to presence of activity cliffs).\" Response: Following the reviewer’s advice, we added a discussion of the limitations of the methodology, addressing the caution that needs to be taken while dealing with activity cliffs. Relevant references to activity cliffs are being added."
}
]
},
{
"id": "36288",
"date": "30 Jul 2018",
"name": "Khushbu Shah",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript purports to analyze and disclose the chemical diversity of the FooDB database. It is an interesting study with a logical flow based on appropriate methods.\nThere a few optional suggestions that the authors could adapt in the manuscript:\nIt would be advisable for the authors to add the rationale behind selecting the three versions – GRAS, DrugBank and ZINC for data curation. Since acyclic compounds represented the most common scaffold in FooDB, the authors could expand upon the types of functional groups commonly observed in these acyclic compounds in FooDB. Further, the authors point out that there are more polyphenols in FooDB vs. Phenol-explorer. The authors could include the dataset from Phenol-explorer in a consensus diversity plot (like Figure 4) to clearly represent their reults.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3882",
"date": "10 Aug 2018",
"name": "José L. Medina-Franco",
"role": "Author Response",
"response": "We really appreciate the reviewer´s feedback and value the optional suggestions. In the revised manuscript we added the rationale for selecting the ´specific version of the three data sets. We also included a comment that a systematic analysis of the functional groups present in the acyclic structures is highly relevant. This excellent suggestion, as well as the comparison of the polyphenols in FooDB with those in Phenol-explorer, will be reported in a follow-up study."
}
]
},
{
"id": "36226",
"date": "07 Aug 2018",
"name": "Rachelle J. Bienstock",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper on chemical diversity of FooDB compared to several other databases, including GRAS and DrugBank and drug-like natural products fromZINC12, by Naveja, Rico-Hidalgo, and Medina-Franco was an interesting, informative and nicely presented analysis. The figures and graphical presentation of ChemMaps results in particular is very clear. One thing which I think would be interesting for a further study and analysis, (since epigenetics and some other diseases and health implications are mentioned in regards to polyphenols) is an analysis regarding vitamins and other compounds and dietary supplements which have had specific health claims made. ChemMaps analysis of these compounds according to properties in these databases and correlation with biological pathways would be interesting for future work.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3883",
"date": "10 Aug 2018",
"name": "José L. Medina-Franco",
"role": "Author Response",
"response": "We are grateful for the positive comments and thank the reviewer for the excellent suggestions to expand this work in future studies."
}
]
}
] | 1
|
https://f1000research.com/articles/7-993
|
https://f1000research.com/articles/7-1240/v1
|
10 Aug 18
|
{
"type": "Research Article",
"title": "Exploring machine learning: A bibliometric general approach using Citespace",
"authors": [
"Juan Rincon-Patino",
"Gustavo Ramirez-Gonzalez",
"Juan Carlos Corrales",
"Juan Rincon-Patino",
"Juan Carlos Corrales"
],
"abstract": "Background: Machine learning researches algorithms that allow a machine to learn about resolving problems in different application domains. Due to the wide number of machine learning applications, it is necessary for newcomers to the field to have alternatives to explore this field faster. Methods: In this paper, we present a science mapping analysis on the machine learning research in the period 2007-2017. This study was develop using the CiteSpace tool based on results from Clarivate Web of Science. This analysis shows how the field has evolved, by highlighting the most notable authors, institutions, keywords, countries, categories, and journals. Results: The results provide information on trends and possibilities in the near future, particularly in areas such as health, biology and banking, where machine learning is a valuable tool to generate solutions. Conclusions: Machine learning is being widely studied, and several institutions in countries like the USA and China constantly generate machine learning based solutions. Diseases, such as cancer or Alzheimer’s disease, studies in biology, such as the protein molecule, virtual reality, commerce, smartphones, and ubiquitous computing, are all fields where machine learning contributes to resolving problems.",
"keywords": [
"machine learning",
"science mapping",
"bibliometrics",
"topic analysis",
"citeSpace"
],
"content": "Introduction\n\nMachine learning is a computer science field that studies the learning processes of humans and replicates themusing machines. Different algorithms allow a machine to learn and use the acquired knowledge to resolve several problems that society faces. This field is widely studied and there exists a huge number of articles that present machine learning applications. Consequently, in the present study, we seek to create a generic map about machine learning applications, which allows newcomers to know the fields that are being explored and use machine learning techniques. In this study, we carried out a science mapping analysis of the existing research on machine learning. As a starting point, we find that bibliometrics is a relevant tool to analyze academic research developed on different topics. Bibliometric analyses contribute to the progress of science in many different ways1, for example, by allowing evaluation of progress to be made, identifying trustworthy sources of scientific publications, laying the academic foundation for assessing new developments, or identifying major scientific actors. Performance analysis and science mapping are two bibliometric approaches used to explore a research field2. While performance analysis is an interesting way to evaluate the impact of published papers, based on their citations, science mapping aims at exhibiting the structure of scientific research, showing its evolution and dynamical aspects3.\n\nThe present study performs a science mapping analysis; however, this is not the only approach to discover tendencies or to give an overview of a topic. We can find existing literature reviews on specific machine learning topics such as algorithms4, applications into visual analytics5, and recommendation systems6. 
There are other reviews on applications for different fields, such as medical diagnosis7, radiation oncology8, the semantic web9, models for quality prediction10 and methods for text categorization11. Also, it was possible to find a general review on machine learning12, but without a science mapping analysis, as this study performs. In 3 we find a bibliometric analysis related to machine learning, but this work only focuses on reviewing the state of the research carried out by the journal Knowledge-Based Systems (KnoSys) from 1991 to 2014. 13 and 14 use this method in the medical field, while 15 carries out an analysis in the social work area and 16 in intelligent transportation systems research. Furthermore, there are other approaches and important analyses for providing an overview of a topic or finding its trends, using text mining or latent Dirichlet allocation, such as in 17 and 18, among others.\n\nThis article has the following structure: In the Methods section, we describe the methodology, the dataset extracted, the tool configuration, and how the analysis was performed. The Results section presents the results of the science mapping analysis. The conclusions are given at the end of the article.\n\n\nMethods\n\nWe used the Web of Science (WOS) Core Collection, one of the primary databases for scientific literature worldwide. In the third quarter of 2017, we looked for papers and conference proceedings about machine learning, using that concept as a keyword (‘machine AND learning’), with results ranging from 2007 to 2017 Q2 (papers published up to the second quarter of the year). We used the 'All databases' option to have a complete results list. Finally, the results were sorted by date. 
All the articles, between 2007 and 2017, were taken into account for performing the analysis with the aim of obtaining a general vision of the field.\n\nWe obtained 41,962 records from the WOS Core Collection that were downloaded as plain text including the full record and cited references. The files were named 'download' with .txt as the file extension. Figure 1 shows a summary of the records.\n\nIn CiteSpace version 5.1.R8 SE19–21, we used the records from the WOS database and set a time slicing from 2007–2017, using one year per slice and the default CiteSpace configuration for the term type, links and selection criteria options. We also used the title, abstract, author keywords and keywords plus as term sources. We changed the size of the generated network to fit the graphs, so we reduced the number of documents that were part of the top cited ones on each slice. The Top N value configured for each network is presented below the corresponding figure.\n\nCiteSpace allows us to detect and visualize emerging trends and transient patterns in the scientific literature20; for this purpose, we applied three types of bibliometric techniques as in 22. First, co-author analysis, which investigates leading authors that are cited together23. It uses the authors’ names, affiliation countries and institutions as units of analysis and then it shows the author, institution and country co-occurrences. Second, co-word analysis to establish links between documents24, through keyword and category co-occurrences. Third, co-citation analysis that provides, as a result, the cited author, cited-reference and cited journal co-occurrences.\n\n\nResults\n\nA co-authorship analysis was done to explore the authors who have the greatest bibliographic production in the field of machine learning. Figure 2 shows the resulting network. The network has 301 nodes and 336 links. Each node represents an author, and its width is proportional to the number of the author's publications. 
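The plain-text records described above follow the WOS tagged export layout: each line starts with a two-letter field tag (e.g. AU, TI, PY), indented lines continue the previous field, and ER closes a record. A minimal reader for that layout might look like the sketch below; the function name and the exact field handling are our own assumptions, not part of the paper's tooling:

```python
def parse_wos_records(text):
    """Parse records from a Web of Science tagged plain-text export.

    Lines start with a two-letter field tag ('AU', 'TI', 'PY', ...),
    indented lines continue the previous field, and 'ER' ends a record.
    Returns a list of {tag: [values]} dictionaries.
    """
    records, current, tag = [], {}, None
    for line in text.splitlines():
        if line.startswith("ER"):              # end of record marker
            records.append(current)
            current, tag = {}, None
        elif line.startswith("   ") and tag:   # continuation of the previous field
            current[tag].append(line.strip())
        elif len(line) > 3 and line[:2].strip():
            tag = line[:2]
            current.setdefault(tag, []).append(line[3:].strip())
    return records

sample = "PT J\nAU Wang, Y\n   Zhang, L\nPY 2017\nER"
print(parse_wos_records(sample))
```

Real exports also carry header lines (FN, VR) and an EF end-of-file marker, which this sketch simply skips or folds into an unused field.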
The connections between the nodes represent co-authorship of papers and their width suggests the proportion of the cooperative relationships. Finally, the different colors of the nodes and links represent the years between 2007 and 2017 (Q2). From Figure 2, following a precise analysis supported by CiteSpace and without an additional analysis of duplicates, it can be highlighted that Wang Y, Zhang Y, Liu Y and Zhang L are the authors that have published the highest number of papers on machine learning.\n\nAfter the previous co-authorship analysis, it was relevant to study the authors’ institutions and countries. Figure 3 shows a network with the leading countries in which machine learning is an important subject of study, and the relationships between them. The network has 23 nodes and 85 links. From Figure 3, we can observe that the United States of America (USA) is the most productive country, followed by the People’s Republic of China, Germany, and England. Regarding the distribution, 24,761 papers correspond to the USA, 10,808 to China, 4,479 to Germany, 4,365 to England, 3,866 to India, 3,407 to Spain and 3,045 to Canada. The nodes with the highest centrality, as indicated by purple rings, suggest that the USA plays a major role in machine learning research with authors from other countries, followed by Canada, England, Brazil and Australia. The centrality of these nodes is 0.44 for the USA, 0.42 for Canada, 0.23 for England, 0.18 for Brazil and 0.16 for Australia.\n\nFigure 4 shows the institutions' network, which presents the organizations with the highest production of articles on machine learning. The network has 54 nodes and 159 links. The Chinese Academy of Sciences, Carnegie Mellon University, Stanford University, Massachusetts Institute of Technology, Nanyang Technological University, University of California and Harvard University are part of the institutions that have published the largest number of articles. 
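The co-occurrence counting that underlies networks like these (authors, countries, or institutions appearing together on the same paper) reduces to tallying unordered pairs per record. A minimal sketch with hypothetical affiliation data (the record contents below are illustrative, not drawn from the study's dataset):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count item occurrences (node sizes) and unordered pairwise
    co-occurrences (link weights) from a list of item lists, e.g. the
    affiliation countries listed on each paper."""
    occurrences, pairs = Counter(), Counter()
    for items in records:
        uniq = sorted(set(items))          # dedupe and fix a canonical pair order
        occurrences.update(uniq)
        pairs.update(combinations(uniq, 2))
    return occurrences, pairs

# Hypothetical affiliation lists for three papers
papers = [["USA", "China"], ["USA", "Germany"], ["USA", "China", "Germany"]]
occ, links = cooccurrence(papers)
print(occ["USA"], links[("China", "USA")])  # node size 3, link weight 2
```

Sorting each record gives every pair one canonical key, so `("China", "USA")` and `("USA", "China")` accumulate into the same link.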
Additionally, Harvard University (0.17), Stanford University (0.12), Massachusetts Institute of Technology (0.12) and Columbia University (0.11) have the highest centrality, which means that they occupy key positions on the relevant paths in machine learning research.\n\nBecause the main topics in machine learning research may have changed during the last decade, a co-category analysis was performed to find the main subjects of the publications. We did a preliminary analysis, using the categories generated by WOS, as shown in Table 1.\n\nCOMPUTER SCIENCE ARTIFICIAL INTELLIGENCE (12,594, 30.013%) and ENGINEERING ELECTRICAL-ELECTRONIC (10,715, 25.535%) are the two categories that have the highest number of publications, followed by COMPUTER SCIENCE THEORY METHODS, COMPUTER SCIENCE INFORMATION SYSTEMS and COMPUTER SCIENCE INTERDISCIPLINARY APPLICATIONS. Out of all these categories, we conclude that COMPUTER SCIENCE (and its sub-categories) is the leading one. Apart from this category, other relevant fields for research in machine learning may be biology, telecommunications and automation control systems.\n\nTo perform a deeper analysis, we built a network of co-occurring subject categories, as shown in Figure 5. The resulting network has 27 nodes and 80 links. COMPUTER SCIENCE - INTERDISCIPLINARY APPLICATIONS (0.47), COMPUTER SCIENCE (0.37), ENGINEERING (0.20) and MATHEMATICAL & COMPUTATIONAL BIOLOGY (0.18) are the nodes with the highest centrality, suggesting that they are the main topics that link machine learning studies carried out in different periods. We found that COMPUTER SCIENCE - INTERDISCIPLINARY APPLICATIONS, due to its centrality value, is a relevant category linking the other concepts, which means it can be the basis of future works.\n\nA keyword analysis allows us to observe emerging trends, since it provides information on the content of articles published on the subject. 
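The centrality values quoted throughout this analysis are betweenness-style measures: a node is central when many shortest paths between other nodes pass through it. CiteSpace computes and rescales this internally; the self-contained sketch of Brandes' algorithm below, run on a toy undirected graph, is only illustrative of the idea:

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for an unweighted, undirected
    graph given as {node: set_of_neighbors}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack = []
        pred = {v: [] for v in graph}   # predecessors on shortest paths from s
        sigma = {v: 0 for v in graph}   # number of shortest paths from s
        dist = {v: -1 for v in graph}
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:                    # BFS, since edges are unweighted
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                    # accumulate dependencies in reverse BFS order
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve the double count

# Toy path graph a - b - c: every a-c shortest path passes through b
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(betweenness(g))  # b gets 1.0, the endpoints get 0.0
```

On the real networks, a node such as the USA in Figure 3 would accumulate a high value because it sits on many shortest paths between other countries.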
For this purpose, we constructed several networks of co-occurring keywords. First, we built a network with N=15, where N is the number of top cited or most frequently occurring items taken from each slice (one year in this case). Figure 6 presents the resulting network, which has 23 nodes and 88 links. It is important to remember that each node in the network has several rings around it, whose colors refer to the years in which that keyword appears.\n\nThe most important keywords appearing in Figure 6, as ordered by their citation counts, are classification (5,546), support vector machine (3,347), algorithm (2,681) and neural network (2,450), followed by model (2,253), system (1,898), prediction (1,893), feature selection (1,559), data mining (1,282) and network (1,196). By their centrality, the main keywords are classification (0.56), support vector machine (0.18), pattern recognition (0.17) and neural network (0.10). From these keywords, we can observe that classification algorithms, such as support vector machines, have been widely studied and represent an important intellectual turning point, acting as bridges that link concepts over different periods. We can find all the concepts connected to this main node. Other relevant algorithms are the ones used for regression purposes, such as neural networks, and the ones used for grouping purposes, such as k-nearest neighbors.\n\nSecond, a network of co-occurring keywords with N=50 was constructed, the resulting network being shown in Figure 7, with 95 nodes and 420 links. 
The keyword with the highest citation count appearing in the network is classification, with 5,546 citations, followed by support vector machine (3,347), algorithm (2,681), neural network (2,450), model (2,253), system (1,898), prediction (1,893), feature selection (1,559), data mining (1,335), network (1,304), recognition (1,283), regression (1,110), artificial neural network (1,048), random forest (971), identification (966), selection (935), optimization (853), classifier (818), genetic algorithm (743) and decision tree (675). This network highlights once again classification (centrality = 0.42) as a widely studied subject, an important turning point between the other concepts with great potential for future work. The prediction keyword, with a centrality equal to 0.13, is another turning point in this network.\n\nLastly, using the network of co-occurring keywords presented in Figure 7, we applied a filter, eliminating subjects that are transversal (such as data or information) and elements inherent to any machine learning work (such as classification or random forest). Figure 8 shows the resulting network. The most important keyword appearing in the network, by citation count, is data mining (1,335), followed by pattern recognition (652), database (624), diagnosis (599), cancer (449) and big data (420). Other relevant keywords are image (414), sentiment analysis (325), disease (240), bioinformatics (209), Alzheimer's disease (188), protein (170) and computer vision (131). In the network, we can observe that data mining is an important concept in the published works, and that machine learning is becoming relevant in the health field, for the diagnosis of diseases such as cancer or Alzheimer's, by using databases collected from different sources, such as EEG signals or multiple sensors.\n\nA co-citation analysis is an interesting way to measure the relationship between documents. 
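The co-citation counting that underlies such an analysis is itself simple to reproduce; a toy sketch follows (the citing papers' reference lists below are hypothetical, not drawn from the dataset):

```python
from collections import Counter
from itertools import combinations

# Hypothetical reference lists of four citing papers; in the study these come
# from the cited-reference fields of the WOS records.
citing_papers = [
    ["HALL M (2009)", "WITTEN IH (2005)", "CHIH-CHUNG CHANG (2011)"],
    ["HALL M (2009)", "WITTEN IH (2005)", "PEDREGOSA F (2011)"],
    ["WITTEN IH (2005)", "HASTIE TREVOR (2009)"],
    ["HALL M (2009)", "CHIH-CHUNG CHANG (2011)"],
]

# Two references are co-cited whenever they appear in the same reference list;
# the co-citation frequency is the number of lists containing both.
cocitation = Counter()
for refs in citing_papers:
    for pair in combinations(sorted(set(refs)), 2):
        cocitation[pair] += 1

print(cocitation[("HALL M (2009)", "WITTEN IH (2005)")])  # 2
```

The resulting counts are the edge weights of a co-citation network; the same counting works for journals or authors by replacing each reference with its journal or first author.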
It allows us to represent the proximity between the publications of the dataset and the relevant cited articles in external sources. In this case, we did a journal co-citation analysis, which addresses the journals of the items analyzed. It is important to observe that, in this study, when we mention journals, we also include conference proceedings. Table 2 presents the top 10 source journals for machine learning research, based on the statistics from the WOS. LECTURE NOTES IN COMPUTER SCIENCE (published by Springer) is the journal with the highest number of publications, having published 2,107 articles on machine learning research, followed by LECTURE NOTES IN ARTIFICIAL INTELLIGENCE (1,132) and PROCEEDINGS OF SPIE (646). From Table 2, we can see that no single journal concentrates the publications made on the subject of machine learning. This dispersion across journals confirms the multiple applications of machine learning.\n\nIn order to find the most important cited journals and to evaluate the influences and co-citation patterns of the studies in machine learning, we did a journal co-citation analysis, which resulted in the network shown in Figure 9. The network has 23 nodes and 90 links. Concerning co-citation frequency, the most influential journals are MACHINE LEARNING (15,767) and LECTURE NOTES IN COMPUTER SCIENCE (14,684), followed by BIOINFORMATICS (14,067), IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (11,586) and NUCLEIC ACIDS RESEARCH (10,949).\n\nTo identify and analyze the relationships between authors whose works are cited in other publications and the evolution of research communities, we performed an author co-citation analysis. Figure 10 shows the resulting author co-citation network, which has 29 nodes and 131 links. Leo Breiman, a statistician at the University of California, is the author with the highest number of citations (5,270), followed by John Ross Quinlan (2,442), Bernhard Scholkopf (2,125), Vladimir N. 
Vapnik (2,043), Corinna Cortes (1,948) and Mark Hall (1,897).\n\nA reference co-citation analysis allows us to observe which references are the most cited in the articles belonging to the dataset used. Figure 11 shows the resulting network of the reference co-citation analysis. The network has 56 nodes and 235 links. Of these references, HALL M (2009), WITTEN IH (2005) and CHIH-CHUNG CHANG (2011) occupy the top three positions (with citation counts equal to 1,089, 1,039 and 928, respectively), followed by PEDREGOSA F (2011) and HASTIE TREVOR (2009). The nodes with the highest centrality are BISHOP CM (2006, 0.27), DEMSAR J (2016, 0.26), HASTIE TREVOR (2009, 0.24) and WITTEN IH (2005, 0.22), where each entry shows the publication year and centrality. This suggests they are important turning points between the other nodes and interesting references for future publications.\n\n\nConclusions\n\nUnderstanding the dynamics of the machine learning field has practical and significant implications for researchers from different disciplines. In this study, we developed a science mapping analysis of machine learning. From this integrative approach, we identified the trends, state, and evolution of the field. From the results obtained, we can conclude that the USA is the most productive country in the field of machine learning, with double the publications of the People's Republic of China. The Chinese Academy of Sciences, Carnegie Mellon University, Stanford University, Massachusetts Institute of Technology, Nanyang Technological University, University of California, and Harvard University are among the institutions that have published the largest number of articles. It is useful to mention that Machine Learning, Lecture Notes in Computer Science and Bioinformatics are the journals with the most frequently cited documents. However, no single journal concentrates the publications written on the subject. 
A wide range of topics has attracted the interest of scientists and could continue to be important in the future: diseases, such as cancer or Alzheimer's disease; studies in biology, such as proteins; virtual reality; commerce; smartphones; and ubiquitous computing are all important themes related to the applications of machine learning, as shown by this study. This shows that machine learning can improve a large number of applications in society.\n\n\nData availability\n\nDataset 1: Data obtained from Web of Science and Citespace project file, to be opened in Citespace. DOI: 10.5256/f1000research.15619.d21242625",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors are grateful to the Telematics Engineering Group (GIT) of the University of Cauca for scientific support and to the Innovacción Cauca project for the master's scholarship granted to J. Rincon-Patino.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nMartínez MA, Cobo MJ, Herrera M, et al.: Analyzing the Scientific Evolution of Social Work Using Science Mapping. Res Soc Work Pract. 2015; 25(2): 257–277.\n\nNoyons ECM, Moed HF, Luwel M: Combining mapping and citation analysis for evaluative bibliometric purposes: A bibliometric study. J Am Soc Inf Sci. 1999; 50(2): 115–131.\n\nCobo MJ, Martínez MA, Gutiérrez-Salcedo M, et al.: 25 years at Knowledge-Based Systems: A bibliometric analysis. Knowl Based Syst. 2015; 80: 3–13.\n\nMuhamedyev RI: Machine learning methods: An overview. Comput Model New Technol. 2015; 19(6): 14–29.\n\nEndert A, Ribarsky W, Turkay C, et al.: The State of the Art in Integrating Machine Learning into Visual Analytics. Comput Graph Forum. 2017; 36(8): 458–486.\n\nKim MC, Chen C: A scientometric review of emerging trends and new developments in recommendation systems. Scientometrics. 2015; 104(1): 239–263.\n\nKononenko I: Machine learning for medical diagnosis: history, state of the art and perspective. Artif Intell Med. 2001; 23(1): 89–109.\n\nBibault JE, Giraud P, Burgun A: Big Data and machine learning in radiation oncology: State of the art and future prospects. Cancer Lett. 2016; 382(1): 110–117.\n\nPrice S: A review of the state of the art in Machine Learning on the Semantic Web. Proc 2003 UK Work Comput Intell. 2004; 292–299. 
\n\nAl-Jamimi HA, Ahmed M: Machine Learning-Based Software Quality Prediction Models: State of the Art. In 2013 International Conference on Information Science and Applications (ICISA). 2013; 1–4.\n\nDasari DB, Venu Gopala Rao K: Text Categorization and Machine Learning Methods: Current State Of The Art. Glob J Comput Sci Technol. 2012.\n\nFlach PA: On the state of the art in machine learning: A personal review. Artif Intell. 2001; 131(1–2): 199–222.\n\nMoral-Muñoz JA, Cobo MJ, Peis E, et al.: Analyzing the research in Integrative & Complementary Medicine by means of science mapping. Complement Ther Med. 2014; 22(2): 409–418.\n\nChen C, Hu Z, Liu S, et al.: Emerging trends in regenerative medicine: a scientometric analysis in CiteSpace. Expert Opin Biol Ther. 2012; 12(5): 593–608.\n\nMartínez MA, Cobo MJ, Herrera M, et al.: Analyzing the Scientific Evolution of Social Work Using Science Mapping. Res Soc Work Pract. 2015; 25(2): 257–277.\n\nCobo MJ, Chiclana F, Collop A, et al.: A Bibliometric Analysis of the Intelligent Transportation Systems Research Based on Science Mapping. IEEE Trans Intell Transp Syst. 2014; 15(2): 901–908.\n\nZhang Y, Chen H, Lu J, et al.: Detecting and predicting the topic change of Knowledge-based Systems: A topic-based bibliometric analysis from 1991 to 2016. Knowl Based Syst. 2017; 133(Supplement C): 255–268.\n\nZhang Y, Zhang G, Chen H, et al.: Topic analysis and forecasting for science, technology and innovation: Methodology with a case study focusing on big data research. Technol Forecast Soc Change. 2016; 105: 179–191.\n\nChen C: Information Visualization: Beyond the Horizon. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006. 
\n\nChen C: CiteSpace II: Detecting and visualizing emerging trends and transient patterns in scientific literature. J Am Soc Inf Sci Technol. 2006; 57(3): 359–377.\n\nChen C: Searching for intellectual turning points: Progressive knowledge domain visualization. Proc Natl Acad Sci. 2004; 101 Suppl 1: 5303–5310.\n\nSong J, Zhang H, Dong W: A review of emerging trends in global PPP research: analysis and visualization. Scientometrics. 2016; 107(3): 1111–1147.\n\nMcCain KW: Cocited author mapping as a valid representation of intellectual structure. JASIS. 1986; 37(3): 111–122.\n\nRip A, Courtial JP: Co-word maps of biotechnology: An example of cognitive scientometrics. Scientometrics. 1984; 6(6): 381–400.\n\nRincon-Patino J, Ramirez-Gonzalez G, Corrales JC: Dataset 1 in: Exploring machine learning: A bibliometric general approach using Citespace. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15619.d212426"
}
|
[
{
"id": "37591",
"date": "18 Sep 2018",
"name": "Sally Ellingson",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis paper uses a mapping analysis to summarize machine learning literature from 2007-2017. They create a dataset by extracting papers and conference material from the Web of Science collection using the keywords ‘machine AND learning’, which resulted in 41,962 records. They used CiteSpace to visualize the data in several different ways: publications per year, co-authorship network, country network, institution network, co-occurring subjects and keywords, journal co-citation network, etc. The presented graphics include a wealth of information using various node sizes and colorings by year. The work is clearly and accurately presented with some current literature cited. The methods are clearly defined and their dataset and project files to recreate the research are given in a link. I think the paper gives an interesting overview of the directions of machine learning and important researchers, research hubs, and domain topics. It also presents a study that can be followed for looking at any research area. I would suggest doing another proofread, but find the article to be technically sound and interesting.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "37594",
"date": "24 Sep 2018",
"name": "Chaomei Chen",
"expertise": [
"I am the designer of CiteSpace, the tool used in this study."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe description of the process is clear. The interpretation of the results is accurate.\nI recommend that the authors consider the following options to strengthen the study:\nSearch query\nThe data collection used \"machine AND learning\". A more robust search query should take into account additional keywords that may be important to ensure adequate coverage, for example, AI or deep learning.\n\nVersions of CiteSpace\nIt is mentioned in the text that CiteSpace version 5.1.R8 SE was used. However, several figures show the signature of version 4.4.R1.\n\nCoauthorship network\nMore recent versions of CiteSpace support the use of authors' full names as opposed to initials and the last name. Using full names is preferable in such cases.\n\nBurst detection\nBurst detection may be a good addition to the study. For example, it would provide more specific information on which institutions have been particularly active in recent years.\n\nDual-Map Overlay\nAnother potentially useful function is the dual-map overlay feature. It allows researchers to identify where relevant studies are published and which areas are highly influential in terms of how they are cited.\n\nCo-citation networks\nCiteSpace has several more specific functions for analyzing co-citation networks, for example, generating clusters and automatically selecting appropriate cluster labels. 
These functions are highly recommended for this type of study.\nUsing the dataset shared by the authors, I created a visualization to illustrate how one may take advantage of these functions for such studies:\nhttp://cluster.ischool.drexel.edu/~cchen/citespace/images/f1000/f1000.png\nIn summary, the current study is clearly reported and should be reproducible. On the other hand, there are several functions readily available in CiteSpace that are not utilized in the current study. I hope the authors will consider updating their study with the features I recommended here.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1240
|
https://f1000research.com/articles/7-1239/v1
|
10 Aug 18
|
{
"type": "Research Article",
"title": "Relationship between levels of the heavy metals lead, cadmium and mercury, and metallothionein in the gills and stomach of Crassostrea iredalei and Crassostrea glomerata",
"authors": [
"Asus Maizar Suryanto Hertika",
"Kusriani Kusriani",
"Erlinda Indrayani",
"Rahmi Nurdiani",
"Renanda B. D. S. Putra"
],
"abstract": "Background: The objective of this study was to compare the levels of heavy metals (Pb, Hg, and Cd) and metallothionein (MT) in the gills and stomach of two species of mussels (Crassostrea iredalei and Crassostrea glomerata), and to observe the ability of the mussels to absorb the heavy metals Pb, Hg and Cd present in the water. Methods: The mussels were obtained from Mayangan, Kenjeran and Gresik ports, East Java, Indonesia. MT levels were determined using ELISA. Heavy metal levels of Pb, Hg and Cd were assayed using atomic absorption spectrophotometry. Results: The levels of Pb and Cd in the water were below the maximum permissible levels for local water quality standards. By contrast, the level of Hg in the water was above the maximum permissible level for water quality standards. At Mayangan Port (Station 1), the level of Pb was higher than those of Hg and Cd. Levels of MT and heavy metals varied greatly among C. iredalei and C. glomerata individuals, but were always higher in the gills than in the stomach. The highest MT level (160,250 ng/g) was observed at Kenjeran Port (Station 2). MT levels were shown to be significantly associated with heavy metal levels (P<0.0001). Conclusions: This result indicates that MT may be responsible for the sequestration of these heavy metals, as has already been observed in terrestrial animals.",
"keywords": [
"Heavy metal",
"Biomarker",
"Metallothionein",
"Crassostrea iredalei and Crassostrea glomerata"
],
"content": "Introduction\n\nPollution occurring in coastal environments is mainly caused by human and industrial activity, and has become a matter of concern over the last few decades1,2. Common chemical pollutants, including heavy metals such as Cd, Hg and Pb, are considered to be toxic and harmful. Heavy metal pollution may have devastating effects on both the ecological environment and aquatic organisms3. Organisms and biomass contaminated with heavy metals can eventually affect human health4–6.\n\nAccumulation of heavy metals in marine organisms can be considered an important pathway of heavy metal transfer7. As marine bivalves, mussels take up and accumulate heavy metals mainly through their suspension-feeding activity8,9. Mussels are suspension feeders with both aqueous and dietary uptake routes; their diet includes material suspended from sediments, consisting of high-molecular-weight substances, microorganisms, fecal pellets and detritus10,11. Mussels are commonly used to assess the eco-toxicological effects of the products released by anthropogenic activities12–14. In a previous study, mussels were used to evaluate in situ metal contamination in wastewater effluents and other aquatic ecosystems15,16. The concentration of metal in mussel tissue increases concomitantly with the elevation of metal absorption or uptake, and varying metal bioaccumulation levels have been observed in different mussel tissues17,18.\n\nMetallothionein (MT) plays a prime role in the response to heavy metals accumulated in mussels. MT is well known as a biomarker of heavy metal pollution in aquatic organisms19–22. MT is a heavy-metal-binding protein mostly synthesized by bivalves in response to the presence of heavy metals. It functions to remove divalent bonds formed by heavy metals and metalloids23. In another study by Gagnon et al. 
in 201424, MT was also found to bind reactive oxygen species, such as the nitric oxide released during the process of inflammation. Furthermore, the accumulation of heavy metals may induce oxidative stress, which promotes the substantial impairment of lipid function in mussel tissues. Moreover, the accumulation of heavy metals in mussels can also directly affect the health of the bivalve without elevating heavy metal concentrations in bivalve tissues25.\n\nIn a previous study by Raspor et al.26, Crassostrea iredalei and Crassostrea glomerata were used as bioindicators for monitoring heavy metal pollution based on MT levels. MT is synthesized differently among bivalve tissues. The gills and stomach of the bivalve were used to examine heavy metal pollutant levels. However, the specific relationship between each heavy metal (Pb, Hg, Cd) and MT levels in the gills and stomach is largely unknown. In the present study, we therefore determined the relationship between the accumulation of heavy metals (Pb, Hg, Cd) and MT levels in the gills and stomach of Crassostrea iredalei and Crassostrea glomerata obtained from coastal environments in East Java, Indonesia (Mayangan Port, Kenjeran Beach, and Gresik Port). This study can also inform management policy strategies for the East Java coast in an effort to minimize coastal environmental pollution.\n\n\nMethods\n\nMussels (C. glomerata and C. iredalei) were collected from the north coast area of East Java, namely Mayangan Port (Probolinggo), Kenjeran Beach (Surabaya), and Gresik Port (Gresik). Sub-stations 1, 2 and 3 in Mayangan are located geographically at 7°44’12.70’’ S, 113°12’41.54’’ E; 7°43’39.94’’ S, 113°13’19.87’’ E; and 7°44’18.08’’ S, 113°13’40.44’’ E, respectively. At Kenjeran Beach, Surabaya, sub-stations 1, 2 and 3 are located geographically at 7°14’03.67’’ S, 112°47’44.28’’ E; 7°13’52.73’’ S, 112°47’38.72’’ E; and 7°13’41.38’’ S, 112°47’31.14’’ E, respectively. 
Sub-stations 1, 2 and 3 of Gresik Port are located geographically at 7°13’27.61’’ S, 112°40’57.90’’ E; 7°13’28.98’’ S, 112°41’10.24’’ E; and 7°13’23.13’’ S, 112°40’21.07’’ E, respectively. Three samples of gill and stomach tissue of both C. glomerata and C. iredalei were collected from the three sub-stations during the lowest low tide in the intertidal area of each sampling station.\n\nHeavy metals (Pb, Cd, and Hg) were measured in samples of seawater and mussel tissues (gills and stomach) from each sampling station. The seawater was collected and filtered through a 0.45-μm polycarbonate membrane Nucleopore filter (Millipore) into a glass bottle to prevent contamination or metal adsorption. Nitric acid was added to the seawater to obtain a pH lower than 2. The tissue samples were prepared according to an established method27. To oxidize the samples completely and destroy organic substances at low temperature, avoiding mineral loss by evaporation, 0.2 g of gill or stomach tissue was added to 2 ml HNO3 (1 M) (Fluka) and incubated for 30 min. Afterward, the tissue samples were centrifuged for 15 min at 12,000g. The supernatant was collected and the heavy metal content was determined using a Varian A220 Atomic Absorption Spectrophotometer (Varian, Inc.).\n\nBriefly, 0.5 g of the gills and stomach organs of C. iredalei and C. glomerata was washed three times with PBS solution and frozen at −20°C. Frozen tissues were then crushed and mixed with 3 ml homogenization buffer (0.5 M sucrose, 20 mM Tris-HCl buffer, pH 8.6, containing 0.01% β-mercaptoethanol). The homogenate was then centrifuged at 30,000g for 20 min to obtain a supernatant containing MT. A total of 1.05 ml cold ethanol and 80 μl chloroform were then added per 1 ml of supernatant and this was centrifuged at 6,000g for 10 min. The pellet produced was washed using ethanol, chloroform and homogenization buffer at a ratio of 87:1:12, respectively. 
The pellet was then dried under nitrogen gas to complete evaporation before being re-suspended in 300 μl of 5 mM Tris-HCl, 1 mM EDTA, pH 7. The MT concentration in the fraction was evaluated by the addition of 4.2 ml of 0.43 mM 5,5′-dithiobis(2-nitrobenzoic acid) in 0.2 M phosphate buffer, pH 8. The sulfhydryl concentration was then determined after incubating the mixture for 30 min at room temperature.\n\nThe MT content was determined using indirect ELISA. The coating antigen to coating buffer ratio used was 1:40. The solution was incubated overnight at 4°C. Afterward, the plate was washed six times using 100 μl PBS/0.2% Tween solution. Next, 100 μl of primary antibody, rabbit anti-MT IgG1 (1:400) (Santa Cruz Biotechnology, Cat# J0410), was added in assay buffer. The ELISA plate was then incubated at room temperature for 2 hours before it was washed six times with 200 μl PBS/0.2% Tween. In total, 100 μl of polyclonal biotinylated anti-rabbit IgG secondary antibody (1:800) (Santa Cruz Biotechnology, Cat# L061) was added in assay buffer. The mixture was incubated at room temperature for 1 hour and washed six times with PBS/0.2% Tween. Next, 100 μl streptavidin horseradish peroxidase (1:800) was added to the assay buffer as the detection reagent for the biotin-conjugated antibodies. The solution was incubated at room temperature in a shaker incubator for 1 hour and then washed six times with 200 μl of PBS/0.2% Tween. Then, 100 μl of blue 3,3',5,5'-tetramethylbenzidine, a substrate for horseradish peroxidase, was added to each well and the plate was incubated for 20–30 min in a dark room. A reaction was considered to have occurred if the color of the solution changed to blue, indicating the presence of MT. The reaction was stopped by adding 100 μl 1 M HCl, at which point the blue solution became yellow. The absorbance was measured using an ELISA reader at a 450 nm wavelength. 
The results were then converted using a standard curve to obtain the MT value.\n\nPhysicochemical analyses were done according to Standard Methods28. Dissolved oxygen concentration was determined using an oxymeter (YSI PRO 20). pH was measured in situ at the sampling stations using universal pH-indicator strips (MERCK, Cat# HC000419). A refractometer (RHS-10ATC, SINOTECH) was used to measure salinity. Temperature was determined using a mercury thermometer.\n\nData analysis was performed using SPSS version 16. The association of Pb, Cd and Hg contents with the MT value was determined using multiple regression, where the dependent variable Y was the MT level (density or intensity) and the independent variables X1, X2 and X3 were the Pb, Cd and Hg contents, respectively.\n\n\nResults and discussion\n\nThe heavy metal content (Pb, Cd and Hg) observed at the three research stations (Mayangan, Kenjeran, and Gresik Port) is shown in Figure 1. The level of Pb was higher than those of Hg and Cd at all three sampling stations. The highest Pb and Cd values were observed at Kenjeran, at around 0.036 mg/l and 0.012 mg/l, respectively. According to the Ministerial Decree of Living Environment No. 51 Year 2004 concerning water quality standards for heavy metal content, Hg content in aquatic environments should be no more than 0.003 mg/l, Pb no more than 0.05 mg/l and Cd no more than 0.01 mg/l.\n\nStation 1, Mayangan; Station 2, Kenjeran; Station 3, Gresik Port.\n\nThe heavy metal concentrations (Pb, Cd and Hg) in gill and stomach tissues of C. glomerata and C. iredalei are shown in Figure 2.\n\nHeavy metal (Pb, Cd and Hg) content in the gills and stomach of (a) Crassostrea iredalei and (b) Crassostrea glomerata at the three stations. 
Station 1, Mayangan; Station 2, Kenjeran; Station 3, Gresik Port.\n\nMussels were used as candidates to determine the heavy metal concentration in seawater because mussels are filter feeders and stationary29. Many studies have been conducted on the determination of heavy metal levels in mussel tissue as a pollution-monitoring tool30–33. Figure 2 shows that heavy metal levels were higher in the gills than in the stomach of the mussels. The highest heavy metal values in the gill tissue of C. iredalei were obtained from Mayangan, with a Pb concentration of 0.715–1.061 mg/l, Cd at 0.168–0.269 mg/l, and Hg at 0.420–0.731 mg/l. In the stomach, Pb ranged from 0.352 to 0.600 mg/l, Cd from 0.099 to 0.149 mg/l, and Hg from 0.171 to 0.337 mg/l. Similar results were obtained from C. glomerata tissue. The highest heavy metal values in the gills were obtained at Station 1, with a Pb content of 0.419–0.649 mg/l, Cd at around 0.101–0.234 mg/l, and Hg at 0.300–0.582 mg/l. The heavy metal levels of Pb, Cd and Hg in the stomach were 0.231–0.326 mg/l, 0.034–0.134 mg/l, and 0.077–0.308 mg/l, respectively.\n\nMeasurement of MT levels was performed using ELISA. C. iredalei and C. glomerata produced higher MT levels in the gills than in the stomach tissues. The highest MT level, around 160,250 ng/g, was observed in samples obtained from Station 2 (Kenjeran). The highest MT level measured at Mayangan was 123,500 ng/g, while at Gresik Port it was 111,500 ng/g.\n\nSimilar results were observed in C. glomerata samples. The highest MT level was obtained in the gills of C. glomerata collected from Kenjeran, at 159,000 ng/g. At Mayangan, the highest MT level in the gills was around 121,800 ng/g, while at Gresik Port it was around 108,900 ng/g. 
According to Ringwood et al.34, there is a positive association between the level of MT and that of heavy metal pollutants. Heavy metal pollutants cause systemic damage in organisms and induce MT production35. According to Rumahlatu et al.36, MT in mussels binds heavy metals, meaning that MT can be used as an indicator of pollution. Organic materials and heavy metals in seawater can accumulate in bivalves, in the gills, kidneys, and stomach. Furthermore, organic materials accumulated in the mussels are excreted through the kidney, while the heavy metals may induce synthesis of MT in the gills and stomach37. According to Suryono38, bivalves are able to detoxify heavy metals by synthesizing MT. As heavy metals accumulate in the body of the bivalve, MT synthesis reaches its maximum level. This can be used to monitor environmental contamination by heavy metals39. Cu, Cd, and Zn in seawater have been reported to promote MT synthesis in different tissues, such as the digestive gland and gills of mussels40.\n\nThe relationship between the content of heavy metals and the MT level was significant (P<0.0001). According to Sungkawa41, regression analysis basically uses two kinds of variables: an independent variable, denoted X, and a dependent variable, denoted Y. According to Amiard et al.20, regression analysis can be used to determine the most important parameters affecting MT level among natural factors (salinity, sex, season, total protein concentration) or contaminant factors. In the present study, multiple regression analysis of heavy metal concentrations in seawater and the MT level in the gills of C. iredalei yielded the equation: Y = 52,051.866 – 30,919.060 (X1) + 139,589.243 (X2) + 146,797.196 (X3). The results showed that an increase in Pb (X1) by 1 ppm decreased the MT level by 30,919.060 ng/g. Furthermore, an increase in Cd (X2) by 1 ppm would increase the MT level by around 139,589.243 ng/g. 
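For illustration, an equation of this form can be evaluated directly in code; a minimal sketch using the gill-tissue coefficients quoted above (the original fit was performed in SPSS, so this merely re-evaluates the reported model):

```python
def predict_mt_gill(pb_ppm, cd_ppm, hg_ppm):
    """Predicted MT level (ng/g) in C. iredalei gills from the reported
    multiple regression Y = a + b1*Pb + b2*Cd + b3*Hg."""
    return (52051.866
            - 30919.060 * pb_ppm
            + 139589.243 * cd_ppm
            + 146797.196 * hg_ppm)

# With all metal concentrations at zero, the prediction is just the intercept;
# raising Pb by 1 ppm lowers the prediction by exactly the Pb coefficient.
baseline = predict_mt_gill(0, 0, 0)        # 52051.866
effect_of_pb = baseline - predict_mt_gill(1, 0, 0)
print(round(effect_of_pb, 3))              # 30919.06
```

The same pattern applies to the stomach and C. glomerata equations reported below, with their own coefficients substituted in.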
Moreover, an increase in Hg (X3) of 1 ppm increased the MT level by 146,797.196 ng/g.\n\nIn addition, the relationship between the level of heavy metals in seawater and the MT level in the stomach of C. iredalei was also significant (P<0.0001). The following multiple regression equation was produced: Y = 23,320.8 – 53,844.1 (X1) + 268,073 (X2) + 658,306 (X3). The results showed that an increase in Pb (X1) of 1 ppm reduced the MT level by 53,844.1 ng/g, while increases in Cd (X2) and Hg (X3) of 1 ppm elevated the MT level by around 268,073 ng/g and 658,306 ng/g, respectively.\n\nDetermining pollution levels using MT has become of great interest in the marine environment, and MT is seen as a potential biomarker of metal exposure in molluscs and other marine organisms42. In a previous study, MT was found and quantified in various tissues of Mytilus galloprovincialis, especially in the digestive gland and gills43. The results of a prior study showed that the MT content in the digestive gland of Mytilus galloprovincialis was significantly higher than that in the gills44.\n\nWe also examined the relationship between heavy metal levels and MT levels in the gills and stomach of C. glomerata. The heavy metal level had a significant association (P<0.0001) with the MT level in the gills. Using multiple regression analysis, we obtained the following equation: Y = 48,092.338 – 29,404.578 (X1) + 223,621.464 (X2) + 144,733.404 (X3). The results showed that an increase in Pb (X1) of 1 ppm decreased the MT level in the gills by 29,404.578 ng/g, an increase in Cd (X2) of 1 ppm elevated the MT level by 223,621.464 ng/g, and an increase in Hg (X3) of 1 ppm elevated the MT level by 144,733.404 ng/g.\n\nFurthermore, the heavy metal level had a significant association (P<0.0001) with the MT level in the stomach. On the basis of the results of multiple regression of heavy metal content in the stomach of C. 
glomerata, the following equation was obtained: Y = 15,279.782 – 4,991.670 (X1) + 105,058.703 (X2) + 225,262.150 (X3). The results showed that an increase in Pb (X1) of 1 ppm decreased the MT level by 4,991.670 ng/g, while increases in Cd (X2) and Hg (X3) of 1 ppm elevated the MT level by 105,058.703 ng/g and 225,262.150 ng/g, respectively.\n\nThe presence of heavy metals affected the level of MT because MT functions to detoxify heavy metals. According to Rumahlatu et al.35, MT functions as a metal-binding protein that accumulates in the mussel body and can be used as a marker of heavy metal pollutants. Although many aquatic organisms produce MT, making them candidates for modeling heavy metal pollution, mussels have been shown to accumulate higher levels of heavy metals than other species because they are filter feeders. Thus, mussels are good candidates for investigating heavy metal pollution levels through levels of MT45. The differences in tissue distribution may be due to changes in protein metabolism or to protein levels in the digestive gland of mussels46. MT concentrations increased in the tissues of the clam Ruditapes philippinarum and the green mussel Perna viridis after they were exposed to increasing concentrations of Cd in the laboratory47.\n\nThe water quality of the seawater (temperature, pH, dissolved oxygen (DO) and salinity) at each station is shown in Table 1.\n\nppt, parts per thousand\n\nThe present study showed that the temperature of the seawater ranged between 23.4 and 31°C. MT accumulation in the mussel body increases significantly during the dry season48. Temperature has a notable influence on heavy metal solubility: increasing water temperature leads to increased solubility of heavy metals, which are toxic49. According to the Water Quality Standard of Ministerial Decree of Living Environment No. 51 year 2004, the normal temperature for the marine biota environment ranges between 28 and 30°C. 
In the present study, the pH value obtained was around 9. This pH is not suitable for bivalves because when the water's pH is high, heavy metals in seawater settle at the bottom and are absorbed by bivalves50, leading to death of the bivalves. The salinity obtained ranged between 17 and 33 parts per thousand (ppt). According to KMNLH No. 51 Year 2004, the standard quality of seawater salinity is around 27–33 ppt. The distribution and concentration of heavy metals in the water environment increase as the salinity increases51. The dissolved oxygen concentration observed in the present study ranged from 3.85 to 8.9 mg/l. Dissolved oxygen also influences heavy metal toxicity, as a lower dissolved oxygen concentration promotes elevated toxicity of heavy metals in the water52.\n\n\nConclusion\n\nOn the basis of the results of this study, we conclude that there is a significant relationship between heavy metal concentrations in the seawater and MT levels in the gills and stomach of C. glomerata and C. iredalei (P<0.0001).\n\n\nData availability\n\nDataset 1. Raw data for heavy metal levels contained in mussels taken from each location. Data are organized by the Figure in which they appear. DOI: http://doi.org/10.5256/f1000research.14861.d21315553",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding for this study was provided by the General Directorate of Research and Development, Ministry of Research and Technology and Higher Education, Research Contract, Number: 063/SP2H/LT/DRPM/IV/2017.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe hereby wish to acknowledge the following individuals for their contribution to this work: The Head of the Laboratory of Physiology, Department of Medicine, University of Brawijaya, for granting us permission to carry out this work in their Molecular and Biotechnology unit.\n\n\nReferences\n\nDoney SC: The growing human footprint on coastal and open-ocean biogeochemistry. Science. 2010; 328(5985): 1512–1516. PubMed Abstract | Publisher Full Text\n\nValdés J: Heavy metal distribution and enrichment in sediments of Mejillones Bay (23° S), Chile: a spatial and temporal approach. Environ Monit Assess. 2012; 184(9): 5283–5294. PubMed Abstract | Publisher Full Text\n\nVélez D, Montoro R: Arsenic speciation in manufactured seafood products. J Food Prot. 1998; 61(9): 1240–1245. PubMed Abstract | Publisher Full Text\n\nFarombi EO, Adelowo OA, Ajimoko YR: Biomarkers of oxidative stress and heavy metal levels as indicators of environmental pollution in African cat fish (Clarias gariepinus) from Nigeria Ogun River. Int J Environ Res Public Health. 2007; 4(2): 158–165. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJärup L: Hazards of heavy metal contamination. Br Med Bull. 2003; 68(1): 167–182. PubMed Abstract | Publisher Full Text\n\nCaussy D, Gochfeld M, Gurzau E, et al.: Lessons from case studies of metals: investigating exposure, bioavailability, and risk. Ecotoxicol Environ Saf. 2003; 56(1): 45–51. 
PubMed Abstract | Publisher Full Text\n\nSoto-Jiménez MF, Arellano-Fiore C, Rocha-Velarde R, et al.: Trophic transfer of lead through a model marine four-level food chain: Tetraselmis suecica, Artemia franciscana, Litopenaeus vannamei, and Haemulon scudderi. Arch Environ Contam Toxicol. 2011; 61(2): 280–291. PubMed Abstract | Publisher Full Text\n\nPan K, Wang WX: Validation of biokinetic model of metals in the scallop Chlamys nobilis in complex field environments. Environ Sci Technol. 2008; 42(16): 6285–6290. PubMed Abstract | Publisher Full Text\n\nMetian M, Warnau M, Teyssié JL, et al.: Characterization of 241Am and 134Cs bioaccumulation in the king scallop Pecten maximus: investigation via three exposure pathways. J Environ Radioact. 2011; 102(6): 543–550. PubMed Abstract | Publisher Full Text\n\nGalimany E, Ramón M, Delgado M: First evidence of fiberglass ingestion by a marine invertebrate (Mytilus galloprovincialis L.) in a N.W. Mediterranean estuary. Mar Pollut Bull. 2009; 58(9): 1334–1338. PubMed Abstract | Publisher Full Text\n\nHull MS, Vikesland PJ, Schultz IR: Uptake and retention of metallic nanoparticles in the Mediterranean mussel (Mytilus galloprovincialis). Aquat Toxicol. 2013; 140–141: 89–97. PubMed Abstract | Publisher Full Text\n\nBocchetti R, Lamberti CV, Pisanelli B, et al.: Seasonal variations of exposure biomarkers, oxidative stress responses and cell damage in the clams, Tapes philippinarum, and mussels, Mytilus galloprovincialis, from Adriatic sea. Mar Environ Res. 2008; 66(1): 24–26. PubMed Abstract | Publisher Full Text\n\nGuidi P, Frenzilli G, Benedetti M, et al.: Antioxidant, genotoxic and lysosomal biomarkers in the freshwater bivalve (Unio pictorum) transplanted in a metal polluted river basin. Aquat Toxicol. 2010; 100(1): 75–83.. 
PubMed Abstract | Publisher Full Text\n\nBenedetti M, Gorbi S, Fattorini D, et al.: Environmental hazards from natural hydrocarbons seepage: integrated classification of risk from sediment chemistry, bioavailability and biomarkers responses in sentinel species. Environ Pollut. 2014; 185: 116–126. PubMed Abstract | Publisher Full Text\n\nGagnon C, Gagné F, Turcotte P, et al.: Exposure of caged mussels to metals in a primary-treated municipal wastewater plume. Chemosphere. 2006; 62(6): 998–1010. PubMed Abstract | Publisher Full Text\n\nGillis PL, Gagné F, McInnis R, et al.: The impact of municipal wastewater effluent on field-deployed freshwater mussels in the Grand River (Ontario, Canada). Environ Toxicol Chem. 2014; 33(1): 134–143. PubMed Abstract | Publisher Full Text\n\nArmstead M, Yeager JL: 6 In Situ Toxicity Testing of Unionids. Freshwater Bivalve Ecotoxicology. 2006; 135.\n\nJebali J, Chouba L, Banni M, et al.: Comparative study of the bioaccumulation and elimination of trace metals (Cd, Pb, Zn, Mn and Fe) in the digestive gland, gills and muscle of bivalve Pinna nobilis during a field transplant experiment. J Trace Elem Med Biol. 2014; 28(2): 212–217. PubMed Abstract | Publisher Full Text\n\nDallinger R: Invertebrate organisms as biological indicators of heavy metal pollution. Appl Biochem Biotechnol. 1994; 48(1): 27–31. PubMed Abstract | Publisher Full Text\n\nAmiard JC, Amiard-Triquet C, Barka S, et al.: Metallothioneins in aquatic invertebrates: their role in metal detoxification and their use as biomarkers. Aquat Toxicol. 2006; 76(2): 160–202. PubMed Abstract | Publisher Full Text\n\nSimoniello P, Filosa S, Riggio M, et al.: Responses to cadmium intoxication in the liver of the wall lizard Podarcis sicula. Comp Biochem Physiol C Toxicol Pharmacol. 2010; 151(2): 194–203. PubMed Abstract | Publisher Full Text\n\nProzialeck WC, Edwards JR: Early biomarkers of cadmium exposure and nephrotoxicity. Biometals. 2010; 23(5): 793–809. 
PubMed Abstract | Publisher Full Text\n\nFalfushynska HI, Gnatyshyna LL, Stoliar OB: Effect of in situ exposure history on the molecular responses of freshwater bivalve Anodonta anatina (Unionidae) to trace metals. Ecotoxicol Environ Saf. 2013; 89: 73–83. PubMed Abstract | Publisher Full Text\n\nGagnon C, Turcotte P, Trépanier S, et al.: Impacts of municipal wastewater oxidative treatments: Changes in metal physical speciation and bioavailability. Chemosphere. 2014; 97: 86–91. PubMed Abstract | Publisher Full Text\n\nOtter RR, McKinney D, Brown B, et al.: Bioaccumulation of metals in three freshwater mussel species exposed in situ during and after dredging at a coal ash spill site (Tennessee Valley Authority Kingston Fossil Plant). Environ Monit Assess. 2015; 187(6): 334. PubMed Abstract | Publisher Full Text\n\nRaspor B, Pavičić J, Branica M: Cadmium-induced proteins from mytilus galloprovincialis-polarographic characterization and study of their interaction with cadmium. Mar Chem. 1989; 28(1–3): 199–214. Publisher Full Text\n\nTrinchella F, Esposito MG, Simoniello P, et al.: Cadmium, lead and metallothionein contents in cultivated mussels (Mytilus galloprovincialis) from the Gulf of Naples (Southern Italy). Aquaculture Res. 2013; 44(7): 1076–1084. Publisher Full Text\n\nAPHA: Standard methods for the examination of water and wastewater. New York: American Public Health Association. 2005. Reference Source\n\nRamakritinan CM, Chandurvelan R, Kumaraguru AK: Acute Toxicity of Metals: Cu, Pb, Cd, Hg and Zn on Marine Molluscs, Cerithedia cingulata G., and Modiolus philippinarum H. 2012. Reference Source\n\nRegoli F: Trace metals and antioxidant enzymes in gills and digestive gland of the Mediterranean mussel Mytilus galloprovincialis. Arch Environ Contam Toxicol. 1998; 34(1): 48–63. PubMed Abstract | Publisher Full Text\n\nMale YT, Ch A, Nanlohy, et al.: Preliminary analysis of mercury content (Hg) at several shells types. Ind J Chem Res. 
2014; 136–142.\n\nHutagalung HP: Heavy metal In Marine Environment. Pewarta Oceana. 1984; 9(1): 12–19.\n\nShaari H, Raven B, Sultan K, et al.: Status of Heavy Metals Concentrations in Oysters (Crassostrea sp.) from Setiu Wetlands, Terengganu, Malaysia. Sains Malaysiana. 2016; 45(3): 417–424. Reference Source\n\nRingwood AH, Hoguet J, Keppler C, et al.: Linkages between cellular biomarker responses and reproductive success in oysters--Crassostrea virginica. Mar Environ Res. 2004; 58(2–5): 151–155. PubMed Abstract | Publisher Full Text\n\nRumahlatu D, Corebima AD, Amin M, et al.: Kadmium dan Efeknya terhadap Ekspresi Protein Metallothionein pada Deadema setosum (Echinoidea; Echinodermata). Jurnal Penelitian Perikanan. 2012; 1(1): 26–35. Reference Source\n\nGosling E: Bivalve molluscs: biology, ecology and culture. John Wiley & Sons. 2008.\n\nSuryono CA: Bioakumulasi logam berat melalui sistim jaringan makanan dan lingkungan pada kerang bulu Anadara Inflata. ILMU KELAUTAN: Indonesian Journal of Marine Sciences. 2006; 11(1): 19–22. Reference Source\n\nAcker LA, McMahan JR, Gawel JE: The effect of heavy metal pollution in aquatic environments on metallothionein production in Mytilus sp. In Proceedings of the 2005 Puget Sound Georgia Basin Research Conference. 2005. Reference Source\n\nPrusa R, Svoboda M, Blastik O, et al.: Increase in content of metallothionein as marker of resistence to cisplatin treatment. Clin Chem. 2006; 52: A174–A175.\n\nGeret F, Cosson RP: Induction of specific isoforms of metallothionein in mussel tissues after exposure to cadmium or mercury. Arch Environ Contam Toxicol. 2002; 42(1): 36–42. PubMed Abstract | Publisher Full Text\n\nSungkawa I: Penerapan Analisis Regresi dan Korelasi dalam Menentukan Arah Hubungan Antara Dua Faktor Kualitatif pada Tabel Kontingensi. Jurnal Mat Stat. 2013; 13(1): 33–41. 
Reference Source\n\nRotchell JM, Clarke KR, Newton LC, et al.: Hepatic metallothionein as a biomaker for metal contamination: age effects and seasonal variation in European flounders (Pleuronectes flesus) from the Severn Estuary and Bristol Channel. Mar Environ Res. 2001; 52(2): 151–171. PubMed Abstract | Publisher Full Text\n\nSerafim MA, Bebianno MJ: Variation of metallothionein and metal concentrations in the digestive gland of the clam Ruditapes decussatus: sex and seasonal effects. Environ Toxicol Chem. 2001; 20(3): 544–552. PubMed Abstract | Publisher Full Text\n\nPetrović S, Ozretić B, Krajnović-Ozretić M, et al.: Lysosomal membrane stability and metallothioneins in digestive gland of Mussels (Mytilus galloprovincialis Lam.) as biomarkers in a field study. Mar Pollut Bull. 2001; 42(12): 1373–1378. PubMed Abstract | Publisher Full Text\n\nSembel L: Analisis Logam Berat Pb, Cd dan Cr Berdasarkan Tingkat Salinitas di Estuari Sungai Belau Teluk Lampung. Prosiding PERMAMA. 2011; 85–92. Reference Source\n\nLegras S, Mouneyrac C, Amiard JC, et al.: Changes in metallothionein concentrations in response to variation in natural factors (salinity, sex, weight) and metal contamination in crabs from a metal-rich estuary. J Exp Mar Bio Ecol. 2000; 246(2): 259–279. PubMed Abstract | Publisher Full Text\n\nShi D, Wang WX: Uptake of aqueous and dietary metals by mussel Perna viridis with different Cd exposure histories. Environ Sci Technol. 2005; 39(23): 9363–9369. PubMed Abstract | Publisher Full Text\n\nGhasemian S, Karimzadeh K, Zahmatkesh A: Metallothionein levels and heavy metals in Caspian Sea gammarid, Pontogammarus maeoticus (Crustacea, Amphipoda, Pontogammaridae). Aquaculture, Aquarium, Conservation & Legislation-International Journal of the Bioflux Society (AACL Bioflux). 2016; 9(1). Reference Source\n\nDhahiyat Y: Distribusi kandungan logam berat Pb dan Cd pada kolom air dan sedimen daerah aliran Sungai Citarum Hulu. Jurnal Perikanan Kelautan. 2012; 3(3). 
Reference Source\n\nEl Baidho Z, Lazuardy T, Rohmania S, et al.: Adsorpsi Logam Berat Pb Dalam Larutan Menggunakan Senyawa Xanthate Jerami Padi. Prosiding SNST Fakultas Teknik. 2013; 1(1). Reference Source\n\nKavun VY, Shulkin VM, Khristoforova NK: Metal accumulation in mussels of the Kuril Islands, north-west Pacific Ocean. Mar Environ Res. 2002; 53(3): 219–226. PubMed Abstract | Publisher Full Text\n\nSuwarno FAR, Rahayu E, Nanik S, et al.: ELISA Teori dan Protokol. Universitas Airlangga: Surabaya. 2010.\n\nHertika A, Kusriani K, Indrayani E, et al.: Dataset 1 in: Relationship between levels of the heavy metals lead, cadmium and mercury, and metallothionein in the gills and stomach of Crassostrea iredalei and Crassostrea glomerata. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.14861.d213155"
}
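The multiple regression relationships reported in the article above have the form MT = a + b1·Pb + b2·Cd + b3·Hg. As a minimal sketch of how such an equation is fitted by ordinary least squares, the following uses synthetic data; the concentration ranges and coefficient signs are assumptions chosen only to mimic those reported, and none of the numbers are the study's measurements.

```python
import numpy as np

# Synthetic placeholder data, NOT the study's measurements.
rng = np.random.default_rng(0)
n = 30
pb = rng.uniform(0.3, 1.1, n)   # Pb concentration (ppm), assumed range
cd = rng.uniform(0.05, 0.3, n)  # Cd concentration (ppm), assumed range
hg = rng.uniform(0.1, 0.8, n)   # Hg concentration (ppm), assumed range

# "True" coefficients chosen only to mimic the signs reported in the article
# (negative for Pb, positive for Cd and Hg), plus a little noise.
mt = 50_000 - 30_000 * pb + 140_000 * cd + 147_000 * hg + rng.normal(0, 1_000, n)

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones(n), pb, cd, hg])
coef, *_ = np.linalg.lstsq(X, mt, rcond=None)
intercept, b_pb, b_cd, b_hg = coef

# Coefficient of determination (R^2) of the fit.
pred = X @ coef
r2 = 1 - np.sum((mt - pred) ** 2) / np.sum((mt - mt.mean()) ** 2)
print(intercept, b_pb, b_cd, b_hg, round(r2, 3))
```

Each fitted coefficient is then read exactly as in the article: a 1 ppm increase in the corresponding metal changes the predicted MT level by that coefficient, holding the other metals fixed.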
|
[
{
"id": "37089",
"date": "20 Aug 2018",
"name": "Ima Yudha Perwira",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSummary This study compares the levels of heavy metals (Pb, Hg, and Cd) in the gills and stomach of two mussel species, Crassostrea iredalei and Crassostrea glomerata. It also examines the capacity of these mussels to absorb Pb, Hg, and Cd. The results showed that MT levels in the gills of both mussels are higher than those in the stomach, which corresponds to the higher heavy metal content in the gills than in the stomach. This result indicates a relationship between MT production and heavy metal levels in mussels.\nQuestion 1: This study is clear and accurate. The literature cited also corresponds to the article.\nQuestion 2: This study has an appropriate design. The selection of study sites in several places (Probolinggo, Surabaya, and Gresik) is suitable, given the high concentration of heavy metal industries in those areas.\nQuestion 3: The methods and analysis used by the authors can properly be used by other authors. The ELISA technique is commonly used to analyze MT levels in marine bivalves.\nQuestion 4: The statistical analysis and its interpretation are also correct. Therefore, there is no doubt about it.\nQuestion 5: The source data underlying the results are available to ensure full reproducibility.\nQuestion 6: The authors have stated their conclusions in simple, easy-to-understand sentences.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "37088",
"date": "31 Aug 2018",
"name": "Akhmad Taufiq Mukti",
"expertise": [
"Reviewer Expertise Aquaculture Biotechnology",
"especially Fish Genetics and Reproduction"
],
"suggestion": "Approved",
"report": "Approved\n\nApproved with notes:\nIn the ‘Introduction’, the final sentence of paragraph 1 is not related to the previous sentence. I suggest that the authors add a statement about the direct and indirect influences of heavy metals on the ecological environment and aquatic organisms. In ‘Methods’, I suggest that the authors describe the reasons for selecting the sampling locations. In ‘Methods’, I suggest that the authors describe the reasons for selecting the gills and stomach as the sampled organs. In ‘Results and discussion’, the authors have not provided a discussion based on the results in “Heavy metal content in seawater” and in “Heavy metal analysis in gill and stomach”. The authors used the heavy metals Pb, Hg and Cd; why use these three heavy metals as indicators, and not other heavy metals? Perhaps the authors could explain the reason.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1239
|
https://f1000research.com/articles/7-1238/v1
|
10 Aug 18
|
{
"type": "Research Article",
"title": "Determinants of road traffic injury at Khulna division in Bangladesh: a cross sectional study of road traffic incidents",
"authors": [
"Rafiqul Islam",
"Mostaured Ali Khan",
"Krishna Deb Nath",
"Mosharaf Hossain",
"Golam Mustagir",
"Surasak Taneepanichskul",
"Rafiqul Islam",
"Mostaured Ali Khan",
"Krishna Deb Nath",
"Golam Mustagir",
"Surasak Taneepanichskul"
],
"abstract": "Background: Road traffic injury (RTI) is one of the major causes of death, injury and disability worldwide, and most of these occur in developing countries like Bangladesh. The main objective of this study was to identify the role of various socio-demographic and economic factors regarding the knowledge and consciousness about RTI at Khulna division in Bangladesh. Methods: Primary data were collected from 200 respondents at Khulna Medical College Hospital, Satkhira Sadar Hospital and several private clinics, generated by interviewing people who had experienced a traffic accident in Khulna division, Bangladesh. The Chi-square test and logistic regression model were utilized in this study to analyze the data. Results: The results show that there was a significant association between education (primary to higher secondary school: OR = 3.584, 95% CI = 0.907-14.155; higher educated: OR = 24.070, 95% CI = 4.860-119.206); occupation (farmer and labor: OR = 0.528, 95% CI = 0.208-1.340; others: OR = 0.263, 95% CI = 0.097-0.713); if they were driving a motorcycle (OR = 4.137, 95% CI = 1.229-13.932); proper treatment (OR = 4.690, 95% CI = 1.736-12.673); consciousness about the RTI (OR = 18.394, 95% CI = 6.381-53.025); if they were an unskilled driver (OR = 8.169, 95% CI = 0.96-16.51), unfit vehicles (OR = 3.696, 95% CI = 1.032-13.234), if they were breaking traffic rules (OR = 6.918, 95% CI = 2.237-21.397), faulty road and traffic management (OR = 3.037, 95% CI = 1.125-8.196) and having knowledge about traffic rules in Khulna division, Bangladesh. Conclusion: According to the results of the study, by increasing knowledge and awareness about traffic rules among people through education and awareness programs, imposing strict traffic rules, not giving licenses to unskilled drivers, not allowing unfit vehicles on the road, and reconstruction and proper road management, RTIs can be reduced.",
"keywords": [
"Road Traffic Injury (RTI)",
"Knowledge and Awareness",
"Traffic Rules",
"Socio-demographic and economic characteristics",
"Bangladesh."
],
"content": "Introduction\n\nRoad traffic injury (RTI) is one of the leading causes of death, injury and disability worldwide, in both developed and developing countries. Every year about 1.25 million people die worldwide due to RTIs1, and a high burden of traffic fatalities and injuries occurs in low- and middle-income countries (LMICs); this burden is enhanced by rapid urbanization and motorization2. Road traffic accident deaths are projected to increase to 2.1 million in 2030, mainly due to the increase in the use of motor vehicles that accompanies economic growth in low- and middle-income countries3. Bangladesh is a developing country situated in South Asia, located between 20°34' and 26°38' north latitude and 88°01' and 92°42' east longitude, with an area of 147,570 sq. km, a population of 162.9 million, and a density of 1,251.5 people per sq. km4. Presently the total length of roads in Bangladesh is 21,125.082 km5. In Bangladesh, road traffic accidents, injuries and fatalities are an area of great concern. According to the Bangladesh Road Transport Authority, the number of deaths stood at 2,376 and injuries at 1,958 in 2015 in Bangladesh6. Khulna is an industrial and divisional city of Bangladesh, with an area of 45.65 km2. The total number of vehicles running in Khulna city was greater than 20,990, including about 13,360 non-motorized and 7,630 motorized vehicles, as of 20057.\n\nThe World Health Organization (WHO) has reported on RTIs that “Approximately 1.3 million people die each year on the world's roads and between 20 and 50 million sustain non-fatal injuries”8. Developing countries carry the greatest share of the burden9. A review of the literature across different countries shows that people aged 15–49 years are more vulnerable to road traffic deaths10,11. Men are involved in a greater proportion of road traffic accidents and fatalities in comparison to women11–14. Motorcycles are the most common vehicles to be associated with RTIs. 
According to Nantulya et al.15, buses, trucks, pedestrians and passengers carry the highest burden of morbidity and mortality in RTIs. For Asian countries, income, road design and management, and the vehicles involved in accidents are also important predictors of RTIs16. Different studies have identified various reasons behind RTIs, such as excessive vehicle speed, inexperienced drivers, reckless driving, and violation of traffic rules and signals17–19. A study from the Accident Research Center (ARC) of Bangladesh University of Engineering and Technology found that the death rate from road accidents in Bangladesh is much higher, at about sixty deaths per 10,000 vehicles per year, compared with a rate of two in the USA20.\n\nRTIs are the second most common cause of injury and death in Bangladesh21, and the road traffic accident situation in Khulna city, as in the rest of Bangladesh, is a vital issue; the loss of lives and damage to valuable assets are expected to continue if proper measures are not adopted. Almost 1.8% to 2.2% of gross domestic product (GDP) is lost to road accidents in this country22, which itself demonstrates the severity both in terms of deaths and injuries. So, extensive research and investigation are urgently needed to improve the RTI situation.\n\nTherefore, the main purpose of this study is to identify the socio-demographic differentials and socio-economic factors related to RTI, as well as knowledge and awareness about RTI, and to make recommendations based on the study results.\n\n\nMethods\n\nThis was a cross-sectional study of road traffic incidents.\n\nPrimary data were collected from the orthopedics, neurosurgery and general wards of Khulna Medical College Hospital, Satkhira Sadar Hospital and several private clinics in Khulna and Satkhira districts using purposive sampling. 
Socio-economic and demographic information, injury information, data related to treatment and cost, effects on the family, and information related to knowledge and awareness were collected by questionnaires (Supplementary File 1 and Supplementary File 2) in face-to-face interviews with 200 respondents with a recent RTI. The inclusion criterion was any respondent with a recent RTI in Khulna Division at the time of interview. The data were collected during January and February 2017.\n\nTo analyze the data, SPSS for Windows version 23.0 was used. Cross tables were used to study the association of the respondents' knowledge of traffic rules with their background characteristics. The χ2-test was used to test the significance of the association. Moreover, to identify the determinants of RTI among the respondents, a logistic regression model was fitted. Here, knowledge of traffic rules is treated as the dependent variable, which is addressed as follows:\n\n\n\nAge, gender, education, occupation, religion, monthly income, family members, earning members, place of road traffic injury, accident by motorcycle, bicycle, car, bus or truck, proper treatment, position during the RTI, effect on family, type of financial effect, whether treatment cost was a burden, reasons for the accident, consciousness about RTIs, knowledge of traffic laws from television, radio or newspaper, appropriate application of traffic rules, whether government rules to reduce RTIs are adequate, and whether non-government rules for reducing RTIs are proper were treated as explanatory variables.\n\nTo test the validity of the logistic regression analysis over the population, the cross validity prediction power (CVPP), ρcv2, was applied. The mathematical formula for CVPP is\n\n\n\nwhere n is the number of classes, k is the number of regressors in the fitted model and the cross-validated R is the correlation between observed and predicted values of the dependent variables23. 
The shrinkage (α) of the model is the positive value of (ρcv2 - R2), where ρcv2 is the CVPP and R2 is the coefficient of determination of the model. Furthermore, the stability of R2 of the model is (1 - α). The shrinkage coefficients are presented at the bottom of the respective tables. It is noted that this technique is also used as a model validation technique24–27.\n\n\nResults\n\nThe results of the association between knowledge about traffic rules and the selected socio-demographic and economic characteristics of respondents in Bangladesh are presented in Table 1 and Table 2. In this study, 58% of the respondents had knowledge about traffic rules. Most of the victims were aged 15–44 years (65%), and most (58%) of the respondents had prior knowledge of traffic rules, of which 6.9%, 73.3% and 19.8% were in the 0–14 years, 15–44 years and over-45 years age groups, respectively. Males (87%) were at higher risk of RTI, and 92.2% of those with prior knowledge of traffic rules were male. Among all the respondents, 67% and 33% lived in rural and urban areas, respectively, of whom 58.6% and 41.4%, respectively, had knowledge of traffic rules. In Khulna division, 14.5% of respondents were illiterate, while 45% and 40.5% had completed “primary to higher secondary school (HSC)” level education and a higher level of education, respectively, of which 4.3%, 37.1% and 58.6%, respectively, knew about traffic rules. It appears that knowledge of traffic rules increases with level of education. 47% of respondents belonged to the job and business occupation group, of which 62% had knowledge about traffic rules. A total of 49.5% of the respondents had a monthly family income of 10,001–25,000 taka, termed middle class families, of which 49.1% of the respondents had knowledge of traffic rules.\n\nNGO: non-government organization, RTIs: Road Traffic Injuries. 
p<0.05 is the significance level\n\nMost of the participants had RTIs on urban roads (37.5%), followed by rural roads (33%) and highways (29.5%); 37.9% and 35.3% of those who had RTIs on urban and rural roads, respectively, knew about traffic rules. Based on this study, motorcycles can be defined as the most vulnerable vehicle: 47.4% of respondents reported a motorcycle as their RTI vehicle. Of the respondents whose accident vehicle was a bicycle, 7.8% had prior knowledge of traffic rules. Regarding the victim's position during the RTI, passersby were most affected (39%), followed by passengers (34.5%). In this study area, 72% of participants received proper treatment, of whom 81% had knowledge of traffic rules, and 68% claimed that the RTI had a negative effect on their family, especially financially (40.5%). With regard to the reasons behind RTIs, respondents who had knowledge of traffic rules cited unskilled drivers (30.2%), unfit vehicles (20.7%), breaking traffic rules (32.8%) and faulty roads and road management (39.7%). The number of participants who believed current traffic rules were not sufficient (71%) was significantly higher than the number who believed the rules were sufficient (29%); 63.8% had knowledge of the current traffic rules and felt that the rules were not sufficient. 65.5% of the participants said government rules were inadequate, while 66.5% of respondents indicated that NGO roles were adequate.\n\nA logistic regression analysis was applied to identify the factors significantly associated with knowledge of traffic rules. The results of the logistic regression analysis are presented in Table 3 and Table 4.
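As an illustrative sketch of the kind of model behind Table 3 and Table 4 (not the study's SPSS analysis; the data here are simulated with a made-up effect size), a binary-outcome logistic regression can be fitted by Newton-Raphson:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson.
    Returns (coefficients, standard errors); first entry is the intercept."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))    # fitted probabilities
        W = p * (1 - p)                        # IRLS observation weights
        H = X.T @ (X * W[:, None])             # observed information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))  # Newton step
    se = np.sqrt(np.diag(np.linalg.inv(H)))    # Wald standard errors
    return beta, se

# Simulated binary exposure with a true log-odds effect of 1.0 (OR ~ 2.7)
rng = np.random.default_rng(42)
x = rng.integers(0, 2, size=1000).astype(float)
logit = -0.5 + 1.0 * x
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta, se = fit_logistic(x.reshape(-1, 1), y)
odds_ratio = np.exp(beta[1])  # exp(coefficient) is the odds ratio
```

Exponentiating each fitted coefficient yields the odds ratios reported in the tables.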
In this study, the odds ratio for primary to HSC educated respondents was 3.584 (95% CI = 0.907–14.155), and for higher educated respondents it was 24.070 (95% CI = 4.860–119.206), indicating that primary to HSC level educated respondents were 3.584 times more likely, and higher educated respondents 24.070 times more likely, to know traffic rules when compared with illiterate respondents. It is therefore clear that more highly educated people were more likely to know traffic rules than others. In the case of occupation, the odds ratios for farmers and laborers (0.528, 95% CI = 0.208–1.340) and for others (0.263, 95% CI = 0.097–0.713) indicated that these groups were less likely to know traffic rules than respondents engaged in jobs and business.\n\nHSC: higher secondary school, RC: Reference Category and p<0.05 is the significance level.\n\nNote: Significant at p<0.05; RC = Reference Category; RTIs: Road Traffic Injuries; SE = standard error\n\nThe respondents injured by motorcycles were 4.137 (95% CI = 1.229–13.932) times more likely to know about traffic rules than those injured by trucks. Respondents who received proper treatment had an odds ratio of 4.690 (95% CI = 1.736–12.673), indicating that they were 4.690 times more likely to know traffic rules than respondents who had not received proper treatment. People who were conscious during their RTI had an odds ratio of 18.394 (95% CI = 6.381–53.025), indicating that they were 18.394 times more likely to have prior knowledge of traffic rules than those who were not conscious.
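The odds ratios and 95% confidence intervals quoted above follow from the usual transformation of a logistic coefficient β and its standard error: OR = exp(β), CI = exp(β ± 1.96·SE). A minimal sketch (the β and SE below are back-calculated from the reported proper-treatment odds ratio purely for illustration, not taken from the study's tables):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta = ln(4.690) and an SE chosen so the interval lands near (1.736, 12.673)
or_, lo, hi = odds_ratio_ci(beta=1.545, se=0.507)
# or_ ~ 4.69, lo ~ 1.74, hi ~ 12.66
```

Note that the interval is symmetric on the log-odds scale, which is why the reported CIs are skewed on the odds-ratio scale.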
Regarding the reasons behind RTIs, unskilled drivers had an odds ratio of 8.169 (95% CI = 0.96–16.51), unfit vehicles 3.696 (95% CI = 1.032–13.234), breaking traffic rules 6.918 (95% CI = 2.237–21.397), and faulty roads and management 3.037 (95% CI = 1.125–8.196), indicating that respondents citing these reasons were 8.169, 3.696, 6.918 and 3.037 times more likely, respectively, to know about the traffic rules than respondents who answered “No”.\n\n\nDiscussion\n\nKnowledge of traffic rules is a very important factor in reducing RTIs17,28. In this study, it was observed that the age group at most risk of being involved in an RTI in Khulna division is 15–44 years. Similar results were found in Ethiopia in 201429, in Nigeria30 and in India31. It was observed that those aged 15–44 years had more knowledge than the other age groups. Males were at relatively higher risk than females, as in other developing countries14,32. Similarly, deaths from RTIs were higher for males in Iran33 and in India34, and knowledge of traffic rules was higher in the male population. In Khulna, the majority of victims were from rural areas, similar to the findings of Mishra et al.35, and had an education level of primary to HSC level. Most of the individuals educated to a higher level were familiar with traffic rules; education can play a positive role in preventing RTIs. In this area, the majority of the respondents had jobs or businesses and had good knowledge of traffic rules compared with laborers, farmers etc. Middle-income households were termed middle class families, and a number of victims were from middle class families. Among these respondents, victims experienced RTIs on urban and rural roads.
We found motorcycles to be the most vulnerable vehicle, a result similar to those found in Thailand in 200936 and also in Nigeria30,37 and many other studies31,38, where the majority had no knowledge of traffic rules. Regarding the victim's position during RTIs, passersby were affected most39, along with passengers; a study in India showed similar findings31,34,38. In this study area, the majority of participants received proper treatment and had knowledge of traffic rules. RTIs had an adverse effect on families, mostly financial, as victims considered the treatment cost due to the RTI a burden. Respondents identified several reasons behind RTIs: unskilled drivers40, unfit vehicles, breaking traffic rules and faulty roads and management, which shows similarities with results from Iran41 and other developing countries15,42. Disabilities and deaths caused by RTIs can only be addressed with a change in attitude43. Most of the participants thought traffic rules were not sufficient and that the Government's steps were not enough to reduce RTIs. The majority of respondents highlighted the role of NGOs; similarly, Mohan & Roberts argued that a government and private partnership is needed to reduce RTIs44. Further intervention studies are needed to put more focus on reducing RTIs.\n\n\nConclusion\n\nThis study has tried to explain the general characteristics of RTIs and their associated factors in the Khulna division, Bangladesh. With the growing population and urbanization, a safe, properly managed and systematic transportation system is urgently needed in Bangladesh to fulfill both current and future demand. Based on the study results, increased emphasis on education is advised, as well as increasing public awareness of RTIs; NGOs could play a role here. Awareness of RTIs should be promoted through training and awareness-related programs, especially in less well educated rural areas. Strict legislation must be enacted and followed.
The government should not give licenses to unskilled drivers or to those with unfit vehicles. Road management systems must be well planned and systematic, and all damaged roads must be repaired in time. Both government and private organizations are needed to eradicate road traffic accidents.\n\n\nEthical statement\n\nEthical approval (Number 0089) was obtained from the Department of Population Science and Human Resource Development, University of Rajshahi, Rajshahi-6205, Bangladesh.\n\n\nData availability\n\nDataset 1: Khulna data set 10.5256/f1000research.15330.d21293345",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research is supported by Rachadapisek Fund for Postdoctoral Fellowship, Chulalongkorn University.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1 – Study questionnaire (English).\n\nClick here to access the data.\n\nSupplementary File 2 – Study questionnaire (Bengali).\n\nClick here to access the data.\n\n\nReferences\n\nWHO: Global status report on road safety 2015. World Health Organization. 2015. Reference Source\n\nWHO: Global status report on road safety 2013: supporting a decade of action. World Health Organization; 2013. Reference Source\n\nMathers CD, Loncar D: Projections of global mortality and burden of disease from 2002 to 2030. PLoS Med. 2006; 3(11): e442. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUN: UNdata | country profile | Bangladesh. 2017. Reference Source\n\nMamun KAA: Road Transport and Highways Division, Government of the People's Republic of Bangladesh. 2017. Reference Source\n\nBRTA: Bangladesh Road Transport Authority(BRTA) | Road accident and casualties Statistic. 2016.\n\nRezaur R, editor: Road traffic accident situation in Khulna city, Bangladesh. Proceedings of the Eastern Asia Society for Transportation Studies. 2005. Reference Source\n\nWHO: Global status report on road safety: time for action. World Health Organization; 2009. Reference Source\n\nNantulya VM, Reich MR: The neglected epidemic: road traffic injuries in developing countries. BMJ. 2002; 324(7346): 1139–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBachani AM, Koradia P, Herbert HK, et al.: Road traffic injuries in Kenya: the health burden and risk factors in two districts. Traffic Inj Prev. 2012; 13 Suppl 1: 24–30. 
PubMed Abstract | Publisher Full Text\n\nShah SG, Khoumbati K, Soomro B: The pattern of deaths in road traffic crashes in Sindh, Pakistan. Int J Inj Contr Saf Promot. 2007; 14(4): 231–9. PubMed Abstract | Publisher Full Text\n\nDitsuwan V, Veerman LJ, Barendregt JJ, et al.: The national burden of road traffic injuries in Thailand. Popul Health Metr. 2011; 9(1): 2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarrib A, Herbst AJ, Hosegood V, et al.: Injury mortality in rural South Africa 2000-2007: rates and associated factors. Trop Med Int Health. 2011; 16(4): 439–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHyder AA, Amach OH, Garg N, et al.: Estimating the burden of road traffic injuries among children and adolescents in urban South Asia. Health policy. 2006; 77(2): 129–39. PubMed Abstract | Publisher Full Text\n\nNantulya VM, Reich MR: Equity dimensions of road traffic injuries in low- and middle-income countries. Inj Control Saf Promot. 2003; 10(1–2): 13–20. PubMed Abstract | Publisher Full Text\n\nMohan D: Status of road traffic injuries in Asia. Inj Prev. 2010; 16(Suppl 1): A19–A. Publisher Full Text\n\nÅberg L: Traffic rules and traffic safety. Safety Science. 1998; 29(3): 205–15. Publisher Full Text\n\nJayatilleke AU, Dharmaratne SD, Jayatilleke AC: Increased traffic fines and road traffic crashes in Sri Lanka. Inj Prev. 2012; 18(Suppl 1): A209. Publisher Full Text\n\nWangdi C, Gurung MS, Duba T, et al.: Burden, pattern and causes of road traffic accidents in Bhutan, 2013-2014: a police record review. Int J Inj Contr Saf Promot. 2018; 25(1): 65–69. PubMed Abstract | Publisher Full Text\n\nBiswas S: Road Traffic Injuries: an Emerging Problem in Bangladesh. Faridpur Med Coll J. 2012; 7(1): 5. Publisher Full Text\n\nYusuf HR, Akhter HH, Rahman MH, et al.: Injury-related deaths among women aged 10–50 years in Bangladesh, 1996–97. Lancet. 2000; 355(9211): 1220–4. 
PubMed Abstract | Publisher Full Text\n\nHoque MM, Mahmud SS, Paul S, editors.: The Cost of Road Traffic Accidents in Bangladesh.10th Pacific Regional Science Conference Organization (PRSCO) Summer Institute; 2008. Reference Source\n\nStevens JP: Applied multivariate statistics for the social sciences. Routledge. 2012. Reference Source\n\nIslam MR, Hossain MS: Some standard physical characteristics of students in Seoul: Modeling approach. American Journal of Mathematics and Statistics. 2015; 5(5): 230–7. Reference Source\n\nBeg ARA, Islam MR: Modeling and Forecasting Population Growth of Bangladesh. American Journal of Mathematics and Statistics. 2016; 6(4): 190–5. Reference Source\n\nIslam MR, Hoque MN: Mathematical modeling and projecting population of Bangladesh by age and sex from 2002 to 2031. Emerging Techniques in Applied Demography. Springer; 2015; 53–60. Publisher Full Text\n\nHossain MK, Islam MR, Khan MN, et al.: Contribution of socio-demographic factors on antenatal care in Bangladesh: Modeling approach. Public Health Research. 2015; 5(4): 95–102. Reference Source\n\nDong X, Peek-Asa C, Yang J, et al.: The association of road safety knowledge and risk behaviour with paediatric road traffic injury in Guangzhou, China. Inj Prev. 2011; 17(1): 15–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbegaz T, Berhane Y, Worku A, et al.: Road traffic deaths and injuries are under-reported in Ethiopia: a capture-recapture method. PLoS One. 2014; 9(7): e103001. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIpingbemi O: Spatial analysis and socio-economic burden of road crashes in south-western Nigeria. Int J Inj Contr Saf Promot. 2008; 15(2): 99–108. PubMed Abstract | Publisher Full Text\n\nDandona R, Kumar GA, Raj TS, et al.: Patterns of road traffic injuries in a vulnerable population in Hyderabad, India. Inj Prev. 2006; 12(3): 183–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGhaffar A, Hyder AA, Masud TI: The burden of road traffic injuries in developing countries: the 1st national injury survey of Pakistan. Public Health. 2004; 118(3): 211–7. PubMed Abstract | Publisher Full Text\n\nBahadorimonfared A, Soori H, Mehrabi Y, et al.: Trends of fatal road traffic injuries in Iran (2004-2011). PLoS One. 2013; 8(5): e65198. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHsiao M, Malhotra A, Thakur JS, et al.: Road traffic injury mortality and its mechanisms in India: nationally representative mortality survey of 1.1 million homes. BMJ Open. 2013; 3(8): e002621. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMishra B, Sinha ND, Sukhla S, et al.: Epidemiological study of road traffic accident cases from Western Nepal. Indian J Community Med. 2010; 35(1): 115–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerecki-Gisolf J, Yiengprugsawan V, Kelly M, et al.: The impact of the Thai motorcycle transition on road traffic injury: Thai Cohort Study results. PLoS One. 2015; 10(3): e0120617. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLabinjo M, Juillard C, Kobusingye OC, et al.: The burden of road traffic injuries in Nigeria: results of a population-based survey. Inj Prev. 2009; 15(3): 157–62. PubMed Abstract | Publisher Full Text\n\nDandona R, Kumar GA, Ameer MA, et al.: Incidence and burden of road traffic injuries in urban India. Inj Prev. 2008; 14(6): 354–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMontazeri A: Road-traffic-related mortality in Iran: a descriptive study. Public Health. 2004; 118(2): 110–3. PubMed Abstract | Publisher Full Text\n\nHanna CL, Hasselberg M, Laflamme L, et al.: Road traffic crash circumstances and consequences among young unlicensed drivers: a Swedish cohort study on socioeconomic disparities. BMC Public Health. 2010; 10(1): 14. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhorasani-Zavareh D, Mohammadi R, Khankeh HR, et al.: The requirements and challenges in preventing of road traffic injury in Iran. A qualitative study. BMC Public Health. 2009; 9(1): 486. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStaton C, Vissoci J, Gong E, et al.: Road Traffic Injury Prevention Initiatives: A Systematic Review and Metasummary of Effectiveness in Low and Middle Income Countries. PLoS One. 2016; 11(1): e0144971. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMohan D: Road traffic injuries--a neglected pandemic. Bull World Health Organ. 2003; 81(9): 684–5. PubMed Abstract | Free Full Text\n\nMohan D, Roberts I: Global road safety and the contribution of big business. BMJ. 2001; 323(7314): 648. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIslam R, Khan MA, Nath KD, et al.: Dataset 1 in: Determinants of road traffic injury at Khulna division in Bangladesh: a cross sectional study of road traffic incidents. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15330.d212933"
}
|
[
{
"id": "37038",
"date": "15 Aug 2018",
"name": "Aminur Rahman",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nEnglish needs to be improved a lot. The major challenge is in the method section. Specially why Khulna was chosen for the study settings was not justified. The sample size, how was calculated is not described (this is one of the big issue). Logistic regression model variables should come from the binary analysis, this is not the case here.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "48533",
"date": "15 May 2019",
"name": "Davoud Khorasani-Zavareh",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThanks authors, for an interesting submission article entitled “Determinants of road traffic injury at Khulna division in Bangladesh: a cross sectional study of road traffic incidents”. The manuscript addresses the issue of Road Traffic Injuries (RTIs) in Bangladesh. Therefore, identifying the role of various socio-demographic and economic factors regarding the knowledge and consciousness about RTIs is effective in reducing this hazard type. There are a few points that seem to be of interest to the authors of this manuscript. Overall, this submission can be approved with reservations. Please see the following comments:\nThe title of the article needs to be modified to: \"Determinants of road traffic injures at Khulna division in Bangladesh: a cross sectional study\". In terms of writing, it requires some editing. For example, please be correct RTI’s, and NGO’s. Capitalize each word in Gross Domestic Product (GDP) and the Cross Validity Prediction Power (CVPP). In the abstract, please rewrite the conclusion. In table 2, please correct d.f., rti. In the discussion, \"A study in India showed similar findings …\", the sentence refers to three references that needs to be edited in here. It is recommended that the authors use the 2018 report of WHO: Global status report on road safety.
Please revise this sentence: \"According to the Bangladesh Road Transport Authority, the number of death stood at 2376 and injuries at 1958 as of 2015 in Bangladesh.\" Please review this sentence: \"According to Nantulya et al. buses, trucks, pedestrians and passengers have the highest burden of morbidity and mortality in RTIs\". Usually, in the introduction of the article, this kind of writing is not customary. There is no need to rely on the author name. What exactly was the reason for doing a study in Khulna city? Provide statistics regarding RTIs, from this city, if available. If possible, describe, in the study method, what is the reason for choosing these variables? Why haven’t face to face interviews with other stakeholders, such as the medical staff, been conducted to get their experiences about the determinants of traffic injuries? How is the sampling size selected? How did you find this number (n=200) in this study? This study did not mention potential confounders. Potential bias sources in the study, are not included. Please explain how missing data was addressed. Please indicate the number of participants with missing data for each variable of interest. In the results, please report other analyses that were done, if applicable. For example, note analyses of subgroups and interactions, as well as sensitivity analyses. In Table 1, why is a colon (\":\") used after gender, age, etc.? In the discussion, please summarize the main results regarding the reference to study objectives. In the discussion, please discuss the generalizability (external validity) of the study results. In the discussion section, please summarize the main findings, with a focus on the study objectives. Please give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1238
|
https://f1000research.com/articles/7-1236/v1
|
10 Aug 18
|
{
"type": "Research Article",
"title": "Antenatal care and its effect on risk of pregnancy induced hypertension in Lao PDR: A case-control study",
"authors": [
"Alongkone Phengsavanh",
"Wongsa Laohasiriwong",
"Kritkantorn Suwannaphant",
"Supat Assana",
"Teerasak Phajan",
"Kongmany Chaleunvong",
"Alongkone Phengsavanh",
"Kritkantorn Suwannaphant",
"Supat Assana",
"Teerasak Phajan",
"Kongmany Chaleunvong"
],
"abstract": "Background: Pregnancy induced hypertension (PIH) is a global public health concern as a leading cause of maternal mortality. Lao PDR has a high prevalence of PIH, but little is known about its risk factors. This study aimed to identify risk factors of PIH relating to antenatal care (ANC) in Lao PDR. Methods: This hospital-based age-matched case-control study was carried out between July and December 2017 in tertiary and secondary hospitals in Lao PDR. A total of 258 pregnant women (86 hypertensive and 172 normotensive pregnant women) were recruited to join the study based on specific inclusion criteria. For each case, two consecutive controls were included in the study, matched for maternal age. Data were collected using a structured questionnaire interview to identify the risk factors of PIH relating to ANC. The association between the independent variables and PIH was assessed through bivariable and conditional multiple logistic regression analyses. Results: Mothers with PIH had inadequate ANC (defined as <4 times) (adj. OR=10.23, 95%CI: 3.67–28.49, p<0.001), excessive maternal weight gain during pregnancy (>13kg) (adj. OR=7.35, 95%CI: 3.06–17.69, p<0.001), a history of abortion (adj. OR=3.54, 95%CI: 1.30–9.59, p=0.013), and had received inadequate information about PIH (adj. OR=2.58, 95%CI: 1.03–6.46, p=0.043). Conclusion: Inadequate ANC and maternal factors were major risk factors of PIH in Lao PDR. National PIH guidelines for effective counseling, ANC and treatment should be promptly developed and implemented at all levels in order to improve pregnancy outcomes.",
"keywords": [
"Pregnancy Induced Hypertension",
"Antenatal Care",
"Risk Factors"
],
"content": "Introduction\n\nPregnancy induced hypertension (PIH) is a major reproductive health concern, complicating 2–3% of pregnancies1 and having an incidence of 6–8% among all pregnancies2. PIH, including preeclampsia and eclampsia, was the second leading cause of maternal mortality and morbidity, especially in developing countries3. Globally, PIH was responsible for 16% of maternal deaths4.\n\nThe complications of PIH are severe in developing countries5,6. Compared with postpartum hemorrhage and sepsis, PIH is quite difficult to prevent due to late presentation of symptoms7–9. The causes of PIH are still unknown and the mechanism is yet to be elucidated10. Maternal mortality in Lao PDR remains the highest in Southeast Asia. The direct causes of maternal mortality in Lao PDR were postpartum hemorrhage, PIH, obstructed labor and sepsis, of which PIH was the second leading cause11.\n\nAntenatal care (ANC) is the care of women during pregnancy by skilled health care providers. The components of ANC include early high-risk screening, prevention and care of pregnancy-related complications, including PIH, and provision of health education and health promotion12. PIH can be detected by routine screening of blood pressure and the presence of proteinuria during ANC12,13. Adequate ANC is very useful for the early detection of PIH. Focused ANC is recommended by the World Health Organization, whose evidence-based model identifies four critical times for ANC during pregnancy14. Therefore, in the present study, ANC ≥4 times was defined as adequate or good ANC.\n\nCurrent PIH risk factors and preventive strategies are still questionable. The prevention and management of PIH are unclear due to insufficient knowledge concerning influencing factors, screening methods and preventive strategies. There is limited research on PIH in Lao PDR.
Consequently, this study was conducted to identify risk factors of PIH in Lao PDR relating to ANC.\n\n\nMethods\n\nPostpartum women who had delivered a baby between July and December 2017 in eight hospitals in Vientiane capital were included in this study. These comprised four tertiary hospitals (the Mother and Child, Mahosot, Mitaphab and Sethathirath hospitals) and four provincial secondary care hospitals (Oudomxay, Xiengkhouang, Luangnamtha and Sekong hospitals).\n\nPrimigravida status was considered an exposure for PIH; however, no exact data were available in Lao PDR for the proportion of primigravida among the subjects in this study. Therefore, the sample size was computed using the proportions of primigravida among cases and controls obtained from a similar study in Thailand15. With a 95% confidence level and 80% power, the required minimum sample size was calculated to be 86 cases and 172 controls (case:control ratio of 1:2), 258 subjects in total.\n\nThe subjects were selected based on specific inclusion criteria. Cases were screened for eligibility from medical records by physicians. Women with singleton pregnancies were eligible subjects. Cases were women with PIH diagnosed by physicians. PIH was defined as a systolic blood pressure of ≥140 mmHg and a diastolic blood pressure of ≥90 mmHg measured on two occasions 6 hours apart, accompanied by proteinuria of ≥300 mg per 24 hours, or ≥1+ on dipstick testing, after 20 weeks of gestation.\n\nControls were selected by age-matching with cases in the same hospital. Controls were normotensive pregnant women who had delivered a baby within 3 days in the same hospital, matched to within ±2 years of the case's maternal age. Pregnancies with an abnormal fetus or hydrops fetalis were excluded.\n\nA structured questionnaire (Supplementary File 1) was used as a data collection tool for both cases and controls.
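The sample-size calculation described in the Methods above (95% confidence, 80% power, 1:2 case:control allocation) can be sketched with the standard two-proportion formula. This is an assumption about the exact formula used, and the exposure proportions below are placeholders rather than the primigravida proportions taken from the cited Thai study:

```python
import math

def case_control_n(p1, p2, r=2, z_alpha=1.96, z_beta=0.84):
    """Minimum number of cases for an unmatched two-proportion case-control
    design with r controls per case (95% confidence, 80% power by default).
    p1, p2: expected exposure proportions among cases and controls."""
    pbar = (p1 + r * p2) / (1 + r)  # pooled exposure proportion
    term1 = z_alpha * math.sqrt((1 + 1 / r) * pbar * (1 - pbar))
    term2 = z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2) / r)
    n_cases = math.ceil((term1 + term2) ** 2 / (p1 - p2) ** 2)
    return n_cases, r * n_cases

# Placeholder exposure proportions (illustrative only)
cases, controls = case_control_n(p1=0.55, p2=0.35)
```

With these placeholder proportions the formula gives roughly 72 cases and twice as many controls; the study's reported 86 and 172 would follow from the proportions in its cited source.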
The questionnaire consisted of four parts: general information, socio-demographic characteristics, previous pregnancy history and present pregnancy history. The content of the questionnaire was reviewed for validity by five experts: four obstetricians from central hospitals in Vientiane, Lao PDR, who were members of the Laos Association of Obstetrics, and one public health specialist with reproductive health research experience who had worked at the University of Health Sciences, Lao PDR, for fifteen years.\n\nThe reliability of the questionnaire was tested with 30 subjects at the Military Hospital in Vientiane Capital. The Cronbach's alpha coefficient of the questionnaire was 0.87.\n\nAll cases and controls were interviewed during their hospital admission by physicians from other hospitals who were blinded to the subjects' PIH status. Data were collected between July and December 2017.\n\nThis is a case-control study, retrospective in nature; it is therefore subject to information bias, including recall and investigator bias. To circumvent recall bias, we limited recruitment to mothers who had given birth within the previous week. To limit investigator bias resulting from awareness of PIH conditions, the investigators were blinded to the PIH status of cases and controls, so that the questions would be asked in the same way for both cases and controls.\n\nData analysis was performed using STATA version 10.016. Descriptive statistics were used to describe the characteristics of cases and controls, presenting frequencies, percentages, means, standard deviations, and minimum and maximum values. Simple logistic regression was used to identify the association between each independent variable and PIH. The independent variables with p-value <0.25 were selected to proceed to the multivariable analysis.
Since this is a matched case-control design, conditional logistic regression was used to identify the risk factors of PIH, presenting adjusted odds ratios (OR) with 95% confidence intervals (95% CI) and p-values17.\n\nThe research proposal, questionnaire and reliability test of the questionnaire were submitted to and approved by the Research Ethical Committee of Khon Kaen University, Thailand (Reference No: HE 602069) and the University of Health Sciences, Vientiane, Lao PDR (Reference No: 012/17). Ethical approval from both institutions was obtained prior to the validity test and the study data collection. Patient information (demographic, socioeconomic, reproductive health and pregnancy history, ANC) and written informed consent for participation were obtained from all women, including those who took part in the validity test.\n\n\nResults\n\nA total sample of 258 postpartum women, comprising 86 cases and 172 controls, was included in the analysis. There were no significant differences between cases and controls regarding ethnicity, religion, educational attainment, occupation, type of health insurance, family size, number of pregnancies and number of deliveries (Table 1).\n\nExcessive maternal weight gain (>13 kg) was higher among cases (65.1%) compared with controls (25.8%). History of abortion was higher in controls (35.5%) compared with cases (22.1%). The proportion of cases receiving adequate information about PIH was lower (18.6%) than that of controls (43.6%). Only 50% of cases had adequate ANC (≥4 times), compared with 93.6% of controls (Table 1).\n\nIn the multivariable analysis using conditional multiple logistic regression, the final model showed that factors significantly associated with PIH were: ANC attendance <4 times (adj. OR=10.23, 95%CI: 3.67–28.49, p<0.001), excessive maternal weight gain during pregnancy (>13 kg) (adj. OR=7.35, 95%CI: 3.06–17.69, p<0.001), history of abortion (adj.
OR=3.54, 95%CI: 1.30 – 9.59, p=0.013), and received inadequate information about PIH (adj. OR=2.58, 95%CI: 1.03 – 6.46, p=0.043) (Table 2).\n\n\nDiscussion\n\nThis is the first hospital-based matched case control study aiming to identify risk factors of PIH in Lao PDR. We found that inadequate ANC had a strong association with PIH: 93.6% of controls received ANC ≥4 times, whereas only half of the cases did. This is also supported by other associated factors: excessive weight gain was found among 65.1% of cases but only 23.4% of controls, and only 18.6% of cases received adequate information about PIH whereas almost half of the controls did. Quality ANC should include physical checkups, treatment, health education, counselling, and improving health behaviors. With adequate ANC (≥ 4 times), pregnant women would be monitored and have better pregnancy outcomes and a reduction in complications. This finding supports the results of other similar studies18–20. In addition, a study in Ethiopia also identified a lack of awareness of the risk of hypertension as one of the risk factors of PIH21.\n\nWe also found that a history of abortion is a protective factor for PIH, similar to studies in Iran and Norway, which indicated that pregnant women with a history of abortion had a lower incidence of PIH22,23. In addition, some studies in the US and Norway reported that a history of abortion was a protective factor for PIH24,25.\n\nOther factors that could have been risk factors for PIH, such as gravida, pre-pregnancy body mass index and other socioeconomic factors, did not show any association with PIH in this study.\n\nThere were some limitations of this study since it is a case control study. However, we minimized information bias from investigators during the interviews by blinding the investigators to the PIH status of the cases and controls. 
Therefore, the investigators asked the questions in the same way to both case and control groups.\n\n\nConclusion\n\nInadequate ANC is a major risk factor of PIH in Lao PDR, leading to poor access to information related to PIH. This puts pregnant women at risk of other risk factors such as excessive maternal weight gain. Promoting attendance of ANC at least 4 times during pregnancy and developing national guidelines for PIH, including proactive strategies for antenatal screening, early detection, counseling, provision of health education, ANC and treatment, should help improve pregnancy outcomes in Lao PDR.\n\n\nData availability\n\nF1000Research Dataset 1: Raw data supporting the presented results is provided. The dataset includes socio-demographic, reproductive and medical variables, such as maternal age, ethnicity, religion, education, occupation, monthly family income, type of health insurance, family size, number of pregnancies, history of abortion, gestational age, pre-pregnancy BMI, maternal weight in current pregnancy, number of ANC visits, and receipt of information. DOI: 10.5256/f1000research.15634.d21337926",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nKhon Kaen University provided the scholarship during AP’s PhD.\n\n\nAcknowledgement\n\nWe would like to express sincere thanks and deep appreciation to all subjects, doctors, nurses, and directors in the participating hospitals.\n\n\nSupplementary material\n\nSupplementary File 1: Data collection form in English used in this study.\n\n\nReferences\n\nBrown MA, Mangos G, Davis G, et al.: The natural history of white coat hypertension during pregnancy. BJOG. 2005; 112(5): 601–606. PubMed Abstract | Publisher Full Text\n\nO’Brien TE, Ray JG, Chan WS: Maternal body mass index and the risk of preeclampsia: a systematic overview. Epidemiology. 2003; 14(3): 368–374. PubMed Abstract | Publisher Full Text\n\nKhan KS, Wojdyla D, Say L, et al.: WHO analysis of causes of maternal death: a systematic review. Lancet. 2006; 367(9516): 1066–1074. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Antenatal Care, Report of a Technical Working Group. Geneva: WHO, 2003.\n\nIgberase GO, Ebeigbe PN: Eclampsia: ten-years of experience in a rural tertiary hospital in the Niger delta, Nigeria. J Obstet Gynaecol. 2006; 26(5): 414–417. PubMed Abstract | Publisher Full Text\n\nAdamu YM, Salihu HM, Sathiakumar N, et al.: Maternal mortality in Northern Nigeria: a population-based study. Eur J Obstet Gynecol Reprod Biol. 2003; 109(2): 153–159. PubMed Abstract | Publisher Full Text\n\nIkechebelu JI, Okoli CC: Review of eclampsia at the Nnamdi Azikiwe University teaching hospital, Nnewi (January 1996–December 2000). J Obstet Gynaecol. 2002; 22(3): 287–290. PubMed Abstract | Publisher Full Text\n\nOnuh SO, Aisien AO: Maternal and fetal outcome in eclamptic patients in Benin City, Nigeria. J Obstet Gynaecol. 2004; 24(7): 765–768. 
PubMed Abstract | Publisher Full Text\n\nOnakewhor JU, Gharoro EP: Changing trends in maternal mortality in a developing country. Niger J Clin Pract. 2008; 11(2): 111–120. PubMed Abstract\n\nDuley L: Pre-eclampsia and the hypertensive disorders of pregnancy. Br Med Bull. 2003; 67(1): 161–176. PubMed Abstract | Publisher Full Text\n\nMinistry of Health: Report on Lao PDR Maternal Death Review 2011–2013. Ministry of Health, Lao PDR. 2014.\n\nLincetto O, Mothebesoane-Anoh S, Gomez P, et al.: Antenatal care. World Health Organization Geneva; 2006; 51–62. Reference Source\n\nWrobel MJ, Figge JJ Jr, Izzo JL: Hypertension in diverse populations: a New York State Medicaid clinical guidance document. J Am Soc Hypertens. 2011; 5(4): 208–229. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: WHO antenatal care randomized trial: manual for the implementation of the new model. World Health Organization, Geneva. 2002. Reference Source\n\nLuealon P, Phupong V: Risk factors of preeclampsia in Thai women. J Med Assoc Thai. 2010; 93(6): 661–6. PubMed Abstract\n\nStata Corp: Stata statistical software: Release 10. College Station, Texas. 2007.\n\nDirek L: Data Analysis by STATA program. Chulalongkorn University Press (CUP). Bangkok, Thailand. 2011.\n\nMacdonald-Wallis C, Tilling K, Fraser A, et al.: Gestational weight gain as a risk factor for hypertensive disorders of pregnancy. Am J Obstet Gynecol. 2013; 209(4): 327.e1–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChasan-Taber L, Silveira M, Waring ME, et al.: Gestational Weight Gain, Body Mass Index, and Risk of Hypertensive Disorders of Pregnancy in a Predominantly Puerto Rican Population. Matern Child Health J. 2016; 20(9): 1804–1813. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRuhstaller KE, Bastek JA, Thomas A, et al.: The Effect of Early Excessive Weight Gain on the Development of Hypertension in Pregnancy. Am J Perinatol. 2016; 33(12): 1205–1210. 
PubMed Abstract | Publisher Full Text\n\nAyele G, Lemma S, Agedew E: Factors Associated with Hypertension during Pregnancy in Derashie Woreda South Ethiopia, Case Control. Qual Prim Care. 2016; 24(5): 207–213. Reference Source\n\nTrogstad L, Magnus P, Moffett A, et al.: The effect of recurrent miscarriage and infertility on the risk of pre-eclampsia. BJOG. 2009; 116(1): 108–13. PubMed Abstract | Publisher Full Text\n\nSepidarkish M, Almasi-Hashiani A, Maroufizadeh S: Association between previous spontaneous abortion and pre-eclampsia during a subsequent pregnancy. Int J Gynaecol Obstet. 2017; 136(1): 83–86. PubMed Abstract | Publisher Full Text\n\nTrogstad L, Magnus P, Skjærven R, et al.: Previous abortions and risk of pre-eclampsia. Int J Epidemiol. 2008; 37(6): 1333–1340. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEras JL, Saftlas AF, Triche E, et al.: Abortion and its effect on risk of preeclampsia and transient hypertension. Epidemiology. 2000; 11(1): 36–43. PubMed Abstract\n\nPhengsavanh A, Laohasiriwong W, Suwannaphant K, et al.: Dataset 1 in: Antenatal care and its effect on risk of pregnancy induced hypertension in Lao PDR: A case-control study. F1000Research. 2018. http://www.doi.org/10.5256/f1000research.15634.d213379"
}
|
[
{
"id": "37478",
"date": "07 Sep 2018",
"name": "Ounjai Kor‐anantakul",
"expertise": [
"Maternal Fetal Medicine"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe heading of the article should be “The risk factors of pregnancy induced hypertension in Lao PDR: A case-control study” because there were three other risk factors in the article that had a significant effect on PIH (as shown in Table 2), not only the number of antenatal care visits. There are some mistakes in the discussion part: \"We also found that a history of abortion is a protection factor for PIH, which was similar to a study in Iran and Norway\"\n\nRef 22 - From Norway: The effect of recurrent miscarriage and infertility on the risk of pre-eclampsia. Trogstad L, Magnus P, Moffett A, Stoltenberg C.\n\nRESULTS: An increased risk of pre-eclampsia, although not statistically significant, was found for women with recurrent miscarriages (adjusted OR 1.51, 95% CI 0.80-2.83). Women who had ever been treated for infertility also had an increased risk (adjusted OR 1.29, 95% CI 1.05-1.60). When these two risk factors were combined, the adjusted odds ratio for pre-eclampsia was 2.40 (95% CI 1.11-5.18).\n\nRef 23 - Sepidarkish M, Almasi-Hashiani A, Maroufizadeh S: Association between previous spontaneous abortion and pre-eclampsia during a subsequent pregnancy. Int J Gynaecol Obstet. 2017; 136(1): 83–86.\n\nResults:\n\nIn total, 5170 patients were interviewed and 252 had experienced pre-eclampsia. 
The number of previous spontaneous abortions was found to be associated with pre-eclampsia, and a higher number of previous spontaneous abortions was associated with increased odds of patients having experienced pre-eclampsia (adjusted odds ratio 1.28, 95% confidence interval 1.03-1.59; P=0.025).\n\nConclusion:\n\nA history of spontaneous abortion was associated with increased odds of pre-eclampsia during a subsequent pregnancy.\n\nBoth studies showed that a history of abortion increased the odds of PIH. The discussion part should add some explanation for each factor:\nThe inadequate number of ANC visits – how does this explain the increase in PIH? For example, early ANC can provide some medication for prophylaxis (calcium supplementation in high-risk pregnant women), etc. The excessive weight gain – how to clarify the pathophysiology; is it from inadequate nutrition, or may it be correlated with other medical diseases such as GDM? Received adequate information about PIH – please give more detail about the information that can help with early detection of PIH.\n\nThe results should include the delivery method, length of hospital stay (which will indirectly show other complications that occurred to the mother) and the newborns’ outcomes.\nWere there any differences in outcome between the two groups that had the significant risk factors?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "38566",
"date": "04 Oct 2018",
"name": "Vitaya Titapant",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe title of this study is misleading in its focus on the effect of the number of antenatal care (ANC) visits on the other risk factors for developing pregnancy induced hypertension (PIH). In fact, the number of ANC visits in this study was also just one risk factor. No results in this study showed a correlation between ANC and the other risk factors. The title of this study should be changed to “Risk factors of Pregnancy Induced Hypertension in Lao PDR”, as in the questionnaire. More risk factors, such as a new partner in multiparous pregnant women, several medical conditions (chronic hypertension, diabetes mellitus, renal disease), and pregnancy with increased placental mass, should be included in the questionnaire. At the present time, the new 2016 WHO antenatal care model with a minimum of eight contacts is used instead of the previous one with only four visits, because the latter resulted in a 15% excess of perinatal deaths (some may be from PIH) compared with the former (Dowswell et al1). So, using the previous WHO ANC recommendation may give some misleading results.\n\nThe methodology of this study is technically sound and easily replicated by others. In the discussion section, please explain why gravida, pre-pregnancy BMI and socio-economic factors were not risk factors of PIH in this study. Please define the difference between “received adequate information about PIH” and “received inadequate information about PIH”.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1236
|
https://f1000research.com/articles/7-450/v1
|
11 Apr 18
|
{
"type": "Research Article",
"title": "Missing the point: are journals using the ideal number of decimal places?",
"authors": [
"Adrian G Barnett"
],
"abstract": "Background: The scientific literature is growing in volume and reducing in readability. Poorly presented numbers decrease readability by either fatiguing the reader with too many decimal places, or confusing the reader by not using enough decimal places, and so making it difficult to comprehend differences between numbers. There are guidelines for the ideal number of decimal places, and in this paper I examine how often percents meet these guidelines. Methods: Percents were extracted from the abstracts of research articles published in 2017 in 23 selected journals. Percents were excluded if they referred to a statistical interval, typically a 95% confidence interval. Counts and percents were calculated for the number of percents using too few or too many decimal places, and these percents were compared between journals. Results: The sample had over 43,000 percents from around 9,500 abstracts. Only 55% of the percents were presented according to the guidelines. The most common issue was using too many decimal places (33%), rather than too few (12%). There was a wide variation in presentation between journals, with the range of ideal presentation from a low of 53% (JAMA) to a high of 80% (Lancet Planetary Health). Conclusions: Many percents did not adhere to the guidelines on using decimal places. Using the recommended number of decimal places would make papers easier to read and reduce the burden on readers, and potentially improve comprehension. It should be possible to provide automated feedback to authors on which numbers could be better presented.",
"keywords": [
"decimal places",
"meta-research",
"readability",
"statistics"
],
"content": "Introduction\n\n“Everything should be made as simple as possible, but not simpler.” Albert Einstein (paraphrased).\n\nScientists read papers in order to keep up with the latest developments in their field and improve their research. However, the ever-increasing number of papers is placing greater demands on scientists’ time. In 2010 there were an estimated 75 trials and 11 systematic reviews published per day in the field of health and medicine1, and by 2012 the number of systematic reviews had more than doubled to 26 per day2. Papers have also become less readable over time, with an increase in the use of scientific jargon3.\n\nPoorly presented numbers can decrease readability and can distort or even hide important information. Statistical software packages show results to many decimal places, but this level of accuracy may be spurious, and authors may overcrowd a paper with numbers if they copy the results from software without considering what level of accuracy is appropriate. Papers have been criticised for using too many decimal places, for example, a recent study of just 27 patients that displayed odds ratios to two decimal places4. Journal impact factors have also been frequently criticised for spurious accuracy, as they are quoted to three decimal places5.\n\nAuthors may also over-simplify numbers by rounding and losing important information. For example, a review of the gender bias in funding peer review reported in a results table that 20% of applicants were female in a study of 41,727 applications6, so from these results we only know that the number of female applicants was somewhere between 8,137 and 8,554, a range of 417. To use these results in a meta-analysis it would be better to know the actual number of applicants. The large sample size in this example means that potentially useful information is lost by rounding the percent to an integer.\n\nAuthors must strike a balance between presenting numbers with too little or too much detail. 
The abstract and discussion are a summary of the findings, and here numbers can be rounded to make sentences easier to read. Numbers in the results section and tables can be presented with more detail, because they can be an accurate record of the data (e.g., for meta-analysis) and the reader is usually not expected to read every number in a table, especially a large table. Of course, tables can be made clearer by reducing unnecessary numbers, and so allowing the reader to easily comprehend the key information. There is a similar balance to consider when using acronyms in papers, as an overuse of acronyms can make a paper hard to understand because readers need to retrieve additional information, whereas using established acronyms can speed up reading.\n\nThere are guidelines by Cole7 for presenting numerical data, including means, standard deviations, percentages and p-values. These guidelines are part of the wider EQUATOR guidelines for “Enhancing the QUAlity and Transparency Of health Research” http://www.equator-network.org/8. Cole’s guidelines for percentages are:\n\nIntegers or one decimal place for values under 10%, e.g., 1.1%\n\nIntegers for values above 10%, e.g., 22% not 22.2%\n\nOne decimal place may be needed for values between 90% and 100% when 100% is a natural upper bound, for example the sensitivity of a test, e.g., 99.9% not 100%\n\nUse two or more decimal places only if the range of percents being compared is less than 0.1%, e.g., 50.50% versus 50.55%\n\nThere are also guidelines from journals and style guides. For example, the instructions to authors for the journal Australian Zoologist state that, “Numbers should have a reasonable and consistent number of decimal places.” The Australian style guide also recommends a consistent number of decimal places when comparing numbers, so “1.23 vs 4.56” not “1.23 vs 4.5”9. The Economist style guide recommends, “resisting the precision of more than one decimal place, and generally favouring rounding off. 
Beware of phoney over-precision.”10\n\nIt is not clear whether Cole’s guidelines on presenting numerical data are being adhered to, or if there is generally too little or too much rounding in published papers. An audit of 1,250 risk ratios and associated confidence intervals from the abstracts of BMJ papers between 2011 and 2013 found that one quarter of confidence intervals and an eighth of estimates could have been presented better11.\n\nThis paper examines how percents are presented in a large sample of recent abstracts from multiple journals.\n\n\nMethods\n\nI extracted percentages from abstracts available in PubMed using the “rentrez” R package (version 1.1.0)12. Example abstracts are referred to using their PubMed ID number rather than citing the paper, and readers can find the paper’s details by putting the number into a PubMed search with the search term “[PMID]”.\n\nI searched for papers in the following journals: The BMJ, BMJ Open, Environmental Health Perspectives, F1000Research, JAMA, The Lancet, The Medical Journal of Australia, Nature, NEJM, PLOS ONE and PLOS Medicine. These journals were selected to give a range of journals that publish articles in health and medicine, including some high profile journals and some large open access journals. To look at recent papers, I restricted the search to 2017. To focus on research papers, I restricted the search to article types of: Journal Article, Clinical Trial, Meta-Analysis, Review, Randomized Controlled Trial and Multicenter Study. The search returned 33,147 papers across 23 journals (searching for “The Lancet” included all journals in the Lancet stable).\n\nDespite the initial restriction on article type, the search results included non-research papers that had multiple types, e.g., a retraction of a clinical trial. 
Hence I excluded any papers that included an article type of: Biography, Conference, Comment, Corrected, Editorial, Erratum, Guideline, Historical, News, Lectures, Letter or Retraction.\n\nA flow diagram outlining the selection of papers is shown in Figure 1.\n\nI examined only percents because they are a widely used and important statistic, and are relatively easy to extract using text mining compared with other important statistics, such as the mean or rate ratio. I extracted all percentages from the text of the abstract by searching for all numbers suffixed with a “%”. The key steps for extracting the percents from the abstract were:\n\n1. Simplify the text by removing the “±” symbol and other symbols such as non-separating spaces\n\n2. Find all the percents\n\n3. Exclude percents that refer to statistical intervals or statistical significance, e.g., “95% confidence interval”\n\n4. Record the remaining percents as well as the number of decimal places and significant figures\n\nThe complete steps are detailed in the R code available here: https://github.com/agbarnett/decimal.places. Based on Cole’s guidelines7, I defined the ideal number of decimal places as:\n\n0 for percents between 10 and 90, and percents over 100\n\n1 for percents between 0.1 and 10, and percents between 90 and 100, and percents of exactly 0\n\n2 for percents under 0.1\n\n3 for percents under 0.01\n\n4 for percents under 0.001 but greater than 0\n\nPreferably I would have also considered a greater number of ideal decimal places when the aim was to compare a small difference in two percents. For example, 10.5% compared with 10.6% in the same sentence (PubMed ID 28949973) would both be considered as having one decimal place too many using the above guidelines, but the additional decimal place may be warranted if the small difference of 0.1% is clinically meaningful. 
However, accurately calculating a small difference of less than 0.1% requires all percents to be displayed using two or more decimal places. Ultimately I ignored this issue because it applied to so few abstracts.\n\nI removed percents that referred to statistical intervals (e.g., “95% CI”) as these were labels not results. I searched for common interval percents of 80%, 90%, 95% and 99%. I combined these four percents with the words: “confidence interval”, “credible interval”, “Bayesian credible interval”, “uncertainty interval”, “prediction interval”, “posterior interval” and “range”. I included versions using capital and non-capital letters, and the standard acronyms including “CI” and “PI”. I also removed references to statistical significance percents using the common percents of 1%, 5% and 10% combined with the words: “significance”, “statistical significance” and “alpha level”.\n\nI verified that the percents were correctly recorded for 50 randomly selected articles which contained 198 percents. There were no errors in the recorded percents, but there were 5 percents that were labels rather than results (e.g., “the prevalence of pretreatment NNRTI resistance was near WHO’s 10% threshold” PubMed ID 29198909), and there was an error with the ideal number of decimal places being 4 for a percent of 0% which led to a change in my ideal number of decimal places. There was also a “95% fixed kernel density estimator” which is a statistical interval and illustrates the difficulty of removing every type of statistical interval. I also checked the percentages for the abstract with the largest number of percents and the abstracts with the largest number of decimal places and significant figures. I also checked some abstracts that included percents of exactly 95% to check for any missing interval definitions. 
These checks led to additional definitions of intervals including the non-standard arrangements of “95% IC”, \"CI 95%\" (PubMed ID 28228447) and the typo \"uncertainly interval\" (PubMed ID 29171811).\n\nI only extracted percents that were suffixed with the percent symbol. For example, the only extracted percent for the text “5–10%” or “5 to 10%” would be 10%. Any percents written in words were also not extracted. I also did not extract numbers immediately before the word “percent” or “per cent” as I assumed that these would be rare. I ignored the sign of the percent as I was primarily interested in presentation, so for example “–10%” was extracted as 10%. Similarly “<10%” was extracted as 10%. I only used the abstracts, rather than the main text, because: 1) abstracts are freely available on PubMed for a wide range of journals, whereas the full text can only be easily mined for open access journals such as PLOS ONE, 2) the abstract is a summary of the results and so percentages should be presented according to Cole’s guidelines, whereas percents may be presented with more decimal places in the results in order to give an accurate and reusable record of the data.\n\nI calculated the difference between the observed number of decimal places and the ideal number as defined above. Because most differences were within ±1, I categorised the data into: too few, just right, and too many. I plotted the three categories by journal. I estimated confidence intervals for the percents in these three categories using a Bayesian Multinomial Dirichlet model13. The large sample size meant all confidence intervals had a width of 2% or less when using the complete sample, hence I did not present these intervals as they were not useful. The intervals are used to summarise the uncertainty for the results from journals. 
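The extraction and classification steps described above can be sketched as follows. The original analysis is in R (available at the repository linked above); this Python version is a simplified illustration only: its interval filter covers just a few common label patterns rather than the full set of definitions described in the Methods, and how to treat boundary values (e.g., exactly 90%) is a judgment call here.

```python
import re

def ideal_decimal_places(pct):
    """Ideal decimal places for a percent, following the rules defined
    in the Methods (boundary values are a judgment call here)."""
    if pct == 0:
        return 1
    if pct < 0.001:
        return 4
    if pct < 0.01:
        return 3
    if pct < 0.1:
        return 2
    if pct < 10 or 90 < pct < 100:
        return 1
    return 0  # 10 to 90, and 100 or more

def extract_percents(text):
    """Return (value, decimal_places) for each '%'-suffixed number,
    skipping common statistical-interval labels such as '95% CI'."""
    interval = re.compile(
        r"(80|90|95|99)%\s*(CI|PI|confidence|credible|uncertainty|prediction)",
        re.I)
    results = []
    for m in re.finditer(r"(\d+(?:\.(\d+))?)%", text):
        if interval.match(text, m.start()):
            continue  # a label like '95% CI', not a result
        value = float(m.group(1))
        dp = len(m.group(2)) if m.group(2) else 0
        results.append((value, dp))
    return results

def judge(value, dp):
    """Classify a percent into the paper's three categories."""
    ideal = ideal_decimal_places(value)
    if dp == ideal:
        return "just right"
    return "too few" if dp < ideal else "too many"
```

For example, `extract_percents("Mortality was 22.2% (95% CI 18-27%).")` keeps 22.2% and 27% but drops the "95%" interval label, and `judge(22.2, 1)` reports "too many" because a percent between 10 and 90 should be an integer.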
I did not adjust for the clustering of multiple percents within the same abstract.\n\nIn a sensitivity analysis I excluded percents that could be due to digit preferences, which were those with no decimal places that were a multiple of 10, as well as 75%. I also excluded percents between 90% and 100% because these may or may not have had a natural upper bound at 100%, and so it is difficult to automatically judge whether they should be presented with one or no decimal places. The data extraction and analyses were performed using R (version 3.4.3)14. All the data and code are available here: https://github.com/agbarnett/decimal.places.\n\n\nResults\n\nThere were 43,119 percents from 9,482 abstracts. Over half the percents were from PLOS ONE (Supplementary Table 1). The median number of percents per abstract was 3 with an inter-quartile range from 2 to 6. A histogram of all percents between 0 and 100 is shown in Figure 2; this excludes the 195 percents (0.45%) that were greater than 100%. There are spikes in the histogram at multiples of 10% and at 1%, 5%, 75% and 95%; these are likely due to digit preferences where percents have been rounded to commonly used values.\n\nThe percent and number of percents meeting the guidelines are in Table 1. The recommended number of decimal places was used just over half the time. When the number of decimal places differed from the guidelines, it was more likely to be too many decimals (33%) rather than too few (12%). Only 21% of abstracts (1,947 out of 9,482) used the ideal number of decimal places for every percent. 
After excluding the digit preference percents, as many of these were not results, the recommended number of decimal places was used just 50% of the time and the percent of time that too many decimal places were used increased to 40%.\n\nAn example where too many decimal places were used is, “True retentions of α-tocopherol in cooked foods were as follows: boiling (77.74-242.73%), baking (85.99-212.39%), stir-frying (83.12-957.08%), deep-frying (162.48-4214.53%)” (PubMed ID 28459863).\n\nAn example where too few decimal places were used is, “263 [3%] of 8313 vs 158 [2%] of 8261” (PubMed ID 29132879). As the numerators and denominators are given, we can recalculate the two percents using the recommended one decimal place, which are 3.2% and 1.9%, respectively, a difference of 1.3%. Without the reader working out these percents, the implied difference could be smaller than 0.1% because 3% could be as little as 2.5% (rounded up to 3%) and 2% could be as large as 2.4% (rounded down to 2%).\n\nThere were abstracts where the number of decimal places varied within the same sentence, for example, “pre-2010 vs post-2010 31.69% vs 64%” (PubMed ID 29138196).\n\nSome percents which I judged as having too few decimal places were potentially harshly judged because the sentence aimed to give general percents, for example the following sentence probably did not need the percents to one decimal place, “it is a common, chronic condition, affecting 2–3% of the population in Europe and the USA and requiring 1–3% of health-care expenditure” (PubMed ID 28460828). Some percents with too few decimal places according to Cole’s guidelines were presented using a consistent number of decimal places, for example, “we noted reductions in genotypes 6 and 11 (from 12% [95% CI 6-21%], to 3% [1-7%]” (PubMed ID 28460828); using the guidelines all the percents under 10% should have had one decimal place. 
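The arithmetic in the "3% vs 2%" example above can be checked directly. This small sketch is illustrative only (it is not code from the paper) and shows how integer rounding widens the implied range of the difference:

```python
cases = 263 / 8313 * 100      # percent among cases
controls = 158 / 8261 * 100   # percent among controls

# At the recommended one decimal place the difference (1.3%) is clear:
assert f"{cases:.1f}% vs {controls:.1f}%" == "3.2% vs 1.9%"

# Rounded to integers, as in the abstract, the same results read
# "3% vs 2%", consistent with a true gap anywhere from 0.1% upwards:
assert f"{cases:.0f}% vs {controls:.0f}%" == "3% vs 2%"
smallest_implied_gap = 2.5 - 2.4  # 3% could be 2.5%; 2% could be 2.4%
```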
Some percents with too few decimal places were correctly presented with no decimal places because the sample size was under 100, for example, “with a specimen obtained at 13 to 15 months, in 1 of 25 (4%)” (PubMed ID 26465681). I could not adjust for this correct presentation because I did not extract sample sizes.\n\nThere were large differences between some journals in the number of decimal places used (Figure 3). There is some grouping of Lancet journals, which collectively leaned towards using too few decimal places. The two journals with means closest to the ideal were Lancet Planetary Health and Nature, although the sample size for Lancet Planetary Health is just 30 (Supplementary Table 1). There was a negative correlation between using too few and too many decimal places, so journals that used too many decimals places were less likely to use too few, and vice versa.\n\nOnly two journals had specific guidelines about decimal places in their online instructions to authors (Supplementary Table 2) and these both concerned not using decimal places where the sample size was under 100 (sensible advice which I did not consider here). Some instructions to authors did encourage the use of the EQUATOR guidelines, from where Cole’s guidelines for decimal places are available.\n\n\nDiscussion\n\nNumerical results are vitally important for quantitative research papers. Presenting numbers with too many decimal places can unnecessarily tax the reader and obscure important differences, whereas too much rounding makes it hard to compare results and can make differences appear too small. Overall, I found that only around half of all percents were presented according to the guidelines for decimal places, and the most common problem was using too many decimals. The overuse of decimals may stem from a belief that more numbers reflect greater accuracy. 
It is also likely that most researchers are not aware of the guidelines for presenting percents and other statistics.\n\nThe guidelines are not written in stone and good arguments can be made for not using them in some circumstances, for example, using no decimal places where all the percents are just above and below 10%, or where the differences are large enough to clearly show importance (e.g., a 1% versus 9% difference in mortality instead of 1.0% versus 9.0%). Hence the “around half” estimate for imperfect presentations found here likely overstates the problem. Additionally, there are far more serious mistakes that can be made with numbers, such as using the wrong data15 or mislabelling statistics.\n\nI found large differences between journals in the number of decimal places used. These differences could be due to editorial policy and also to differences in the training and experience of the journals’ author cohorts. Nature had one of the best results in terms of ideal presentation, and it published relatively few papers, which may mean its editors have more time to edit papers for clarity and presentation. PLOS ONE had the largest number of papers in the sample and did relatively badly compared with the guidelines, perhaps because there is no time for editors to fix issues with presenting numbers given the large volume of papers and other important tasks, for example, checking for plagiarism and undeclared competing interests.\n\nThe difference in standards between journals likely adds to the confusion for authors about how to present numbers. Consistency and presentation might be improved by an automated checking procedure similar to the statcheck program that checks for errors in statistical reporting16. This could be used to flag numbers that may need changing and could be part of an automated submission process for journals through online writing tools such as Overleaf17. 
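As a rough illustration of such a check, a minimal Python sketch (hypothetical; neither the author’s R code nor the statcheck program) can find percents by their “%” suffix and flag values under 10% reported without the one decimal place the guidelines recommend:

```python
import re

# Match a number, an optional decimal part, an optional space, then "%"
PERCENT = re.compile(r"(\d+(?:\.(\d+))?)\s?%")

def flag_percents(text):
    """Return percents under 10% that were reported with no decimal places."""
    flags = []
    for match in PERCENT.finditer(text):
        value = float(match.group(1))
        decimals = len(match.group(2) or "")  # digits after the decimal point
        if value < 10 and decimals == 0:
            flags.append(match.group(0))
    return flags

print(flag_percents("reductions from 12% (95% CI 6-21%) to 3% (1-7%)"))  # ['3%', '7%']
```

A real checker would also need the sample-size rule (no decimal places when n is under 100) and ways to skip confidence levels and other non-results, which is exactly where the manual judgement in this study was needed.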
Automating the process would reduce the burden on journal staff.\n\nI only examined percents, but it is likely that other statistics, such as means and risk ratios, are also being imperfectly presented. In fact, using percents may underestimate the problem of spurious precision because percents are almost always between –100 and 100, whereas means can take on a far wider range depending on the unit of measurement, and a wider range of numbers creates more opportunity for poor display. I only examined percents because these are the easiest numbers to automatically extract from text, thanks to the “%” suffix.\n\n\nConclusions\n\nMany percents in abstracts did not adhere to the guidelines on using decimal places. A more considered use of decimal places would increase readability and potentially improve comprehension.\n\n\nData availability\n\nAll the data and code are available here: https://github.com/agbarnett/decimal.places.\n\nArchived data and code at the time of publication: http://doi.org/10.5281/zenodo.121357418.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nAB receives funding from the Australian National Health and Medical Research Council (APP1117784).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Table 1. Table of the percent of times journals used too few and too many decimal places according to the guidelines.\n\nSupplementary Table 2. Instructions to authors about decimal places for percents from the selected journals.\n\n\nReferences\n\nBastian H, Glasziou P, Chalmers I: Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010; 7(9): e1000326. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBastian H: Pubmed Commons. 2013. Reference Source\n\nPlavén-Sigray P, Matheson GJ, Schiffler BC, et al.: The readability of scientific texts is decreasing over time. eLife. 2017; 6: pii: e27725. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFradley MG, Viganego F, Kip K, et al.: Rates and risk of arrhythmias in cancer survivors with chemotherapy-induced cardiomyopathy compared with patients with other cardiomyopathies. Open Heart. 2017; 4(2): e000701. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBornmann L, Marx W: The journal Impact Factor and alternative metrics: A variety of bibliometric measures has been developed to supplant the Impact Factor to better assess the impact of individual research papers. EMBO Rep. 2016; 17(8): 1094–1097. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBornmann L, Mutz R, Daniel HD: Gender differences in grant peer review: A meta-analysis. J Informetr. 2007; 1(3): 226–238. Publisher Full Text\n\nCole TJ: Too many digits: the presentation of numerical data. Arch Dis Child. 2015; 100(7): 608–609. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltman DG, Simera I: A history of the evolution of guidelines for reporting medical research: the long road to the EQUATOR network. J R Soc Med. 2016; 109(2): 67–77. PubMed Abstract | Publisher Full Text\n\nStyle Manual: For Authors, Editors and Printers. John Wiley and Sons Australia, 6th edition, 2002. Reference Source\n\nThe Economist: The Economist Style Guide: 9th Edition. Bloomberg Press, 2005, ISBN 1861979169. Reference Source\n\nCole TJ: Setting number of decimal places for reporting risk ratios: rule of four. BMJ. 2015; 350: h1845. PubMed Abstract | Publisher Full Text\n\nWinter D: rentrez: Entrez in R. R package version 1.1.0. 2017. Reference Source\n\nGelman A, Carlin JB, Stern HS, et al.: Bayesian Data Analysis. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis, 2nd edition, 2003, ISBN 9781420057294. Reference Source\n\nR Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2017. Reference Source\n\nBorwein J, Bailey DH: The Reinhart-Rogoff error – or how not to Excel at economics. The Conversation. 2013. Reference Source\n\nBaker M: Stat-checking software stirs up psychology. Nature. 2016; 540(7631): 151–152. PubMed Abstract | Publisher Full Text\n\nPerkel JM: Scientific writing: the online cooperative. Nature. 2014; 514(7520): 127–128. PubMed Abstract | Publisher Full Text\n\nBarnett A: agbarnett/decimal.places: First release of decimal place code and data (Version v1.0). Zenodo. 2018. Data Source"
}
|
[
{
"id": "33072",
"date": "23 Apr 2018",
"name": "Tim J. Cole",
"expertise": [
"Medical statistics"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBarnett's article tests the adherence of papers in 23 journals to Cole's guidelines on the numerical presentation of percentages. As the author of the guidelines I acknowledge a competing interest. The paper is well designed, executed and reported, and it has a clear message - many papers give percentages to excessive precision. I have a few minor suggestions for improving it.\n1. The term \"accuracy\" used several times in the Introduction and elsewhere should more correctly be \"precision\".\n2. Did the algorithm for identifying percents allow for the possibility of the % appearing preceded by a space?\n3. Looking at the R code I note the long lists of alternative spellings. First converting all text to lower case would have substantially simplified this process.\n4. The spikes in Figure 1 are interesting. The author suggests they represent digit preference, which may be true, but I would have thought it worth formally checking some of them to see.\n5. The paper starts with an apt quotation from Einstein. I wanted to draw the author’s attention to another apt quotation, from Gauss, which appears as a response to my original Archives of Disease in Childhood paper here. It is \"Lack of mathematical education does not become more evident than by excessive precision in numerical calculation.\" Carl Friedrich Gauss (1777-1855).\n\nTim Cole\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3781",
"date": "23 Jul 2018",
"name": "Adrian Barnett",
"role": "Author Response",
"response": "Thanks for your useful comments on the paper. My answers to your numbered questions are:\n1. Agreed and changed in the Introduction and Discussion.\n2. Yes, a preceding space was allowed. I have now added this to the Methods.\n3. I kept the text in upper case because I originally also wanted to find sentences and used the pattern of: full-stop then space then upper-case letter, to define a new sentence. Changing to lower case included some non-sentence breaks for odd character strings such as compounds or labels from genomics. The idea was to compare percentages in the same sentence, but I did not use this in the paper.\n4. This comment was based on my informal checks made whilst compiling the data. As a more formal check I examined fifty randomly selected percents of exactly 50 and found that only 20 were actual results, with the rest being rounded results or thresholds. I’ve added some additional text to the paper on this.\n5. Thank you for this quote. Excessive decimal places have been a problem for some time and this paper will hopefully help to draw attention to it again."
}
]
},
{
"id": "33075",
"date": "26 Jun 2018",
"name": "David J. Winter",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn this article, Barnett examines the precision with which percentages are reported in the abstracts of journals indexed in PubMed. The article does a good job of explaining the motivation and design of the research and putting its results into context. The data presented in the manuscript support the major finding of the work (that percentages are frequently presented with more apparent precision than is reasonable).\n\nBecause my own expertise relates more to the code used for this analysis than the proper presentation of statistics, my review focuses on the scripts used to perform the analysis. The linked code repository is currently lacking a number of files required to perform the analysis. I am sure this is simply an oversight, but I cannot recommend this paper for acceptance until these files are included. In addition, the overall design of the analysis code, which contains hundreds of lines of R code without any user-defined functions and no \"high-level\" documentation of different sections of the scripts, makes it difficult to spot errors or apply this approach to other datasets. I detail these concerns below.\nMajor issues with code\nThe files 'decimalplaces.R' and 'MultinomialCIsBayes.R' are required to perform these analyses but not included in the repository.\nMany of the datasets produced by this analysis are only saved in binary formats (Excel spreadsheets or serialized R objects). 
Key data underlying the results presented in the paper should be made available in a plain-text format (e.g. csv for the tables).\nThe README file for the github repository should contain: (a) brief description of the most important files included in the repository and their purpose (e.g. make.data.R, decimal.places.stats.Rmd) (b) A list of R packages required to run this code (c) Instructions on how a user can run these analyses (perhaps pointing out any steps that take a particularly long time, and how they might be skipped) (d) Links to this paper and the Zenodo archive associated with this repository.\nThe file \"make.data.R\" would benefit from some high-level documentation explaining the motivations for each section of the file, and precisely what is being created by each code block. At present, the comments at the top of each block are very brief and difficult to interpret. For example, the comment \"# get meta data (loop through smaller numbers)\" (appearing on line 101) has no obvious meaning to me.\nMinor issues with code\nI suggest you avoid using the aliases T and F for TRUE and FALSE. These are not reserved variables, and accidentally setting them to some other value can lead to unexpected results.\nThe code often uses an idiom like:\n\nalready.saved = T\nif (already.saved == F) {\n  # do something\n}\n\nI am not sure what purpose this serves, as the value is always hard-coded to TRUE. If the intent is to save users from re-running the data-fetching step, it may make more sense to save the data to csv, then check for the existence of the data file before fetching fresh data.\nThe variable \"journals2\" is defined but never used.\n\nLine 146 has \"for (a in 1:9000)\", hard-coding the number of articles to 9000 (slightly fewer than the number included in the study).\nA lot of typing could be saved from variables like \"ci.phrases\" if abstracts were always converted to lower case before matching strings. 
This would both increase the readability of the code and make it less likely that typographical errors slip into the code.\nMinor issues with the manuscript\nThe word \"accuracy\" is often used to describe the representation of percentages. I think \"precision\" is a better term for what is being described.\nIn the introduction, the sentence starting \"Papers have been criticised for using too many decimal places...\" does not cite any criticism.\nRentrez now has a paper to cite: Winter, D. J. (2017) rentrez: an R package for the NCBI eUtils API1.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3780",
"date": "23 Jul 2018",
"name": "Adrian Barnett",
"role": "Author Response",
"response": "Thanks for your useful comments on the code and documentation thereof.\n1. The files 'decimalplaces.R' and 'MultinomialCIsBayes.R' have now been added to github.\n2. All the key RData files have now also been provided as CSV or tab-delimited files.\n3. The README file in github is more detailed.\n4. I’ve added sections to the make.data.R code and improved the comments.\n5. I've used TRUE/FALSE in place of T/F.\n6. As suggested, I’ve now checked for the existence of the RData files.\n7. The variable \"journals2\" was used as a time saver. The data were created for two sets of journals (‘journals’ and ‘journals2’).\n8. The loop up to 9000 was added because there were sometimes breaks in the online connection to pubmed. I’ve used the full loop and added a warning in the code about this issue.\n9. I kept the text in upper case because I originally also wanted to find sentences and used the pattern of: full-stop then space then upper-case letter, to define a new sentence. Changing to lower case included some non-sentence breaks for odd character strings such as compounds or labels from genomics. The idea was to compare percentages in the same sentence, but I did not use this in the paper."
}
]
}
] | 1
|
https://f1000research.com/articles/7-450
|
https://f1000research.com/articles/7-314/v1
|
13 Mar 18
|
{
"type": "Research Article",
"title": "Haematology of N’Dama and West African Short Horn cattle herds under natural Trypanosoma vivax challenge in Ghana",
"authors": [
"Ebenezer Yaw Ganyo",
"Johnson N Boampong",
"Daniel K Masiga",
"Jandouwe Villinger",
"Paa Kobina Turkson"
],
"abstract": "Background: Animal trypanosomosis is a major cause of economic loss in livestock production in Africa. A suggested control measure is to use breeds with traits of trypanotolerance. The study examines the effect of natural Trypanosoma vivax challenge on haematological parameters in two trypanotolerant cattle [N’Dama and West African Short Horn (WASH)] herds. Methods: T. vivax-specific primers were used to diagnose T. vivax infection in an N’Dama herd at Cape Coast in southern Ghana and a WASH herd at Chegbani in northern Ghana from May to July 2011 in a cross-sectional study. Levels of haematological parameters comprising packed cell volume (PCV), haemoglobin (Hb) concentration and total red blood cell (RBC) and white blood cell (WBC) counts; differential WBC counts (neutrophils, lymphocytes, eosinophils, monocytes and basophils); and RBC indices of mean corpuscular volume (MCV), mean corpuscular haemoglobin (MCH) and mean corpuscular haemoglobin concentration (MCHC) were determined in blood samples and then compared between infected and uninfected cattle. Results: We found that haematological indices for infected and uninfected animals in both breeds were within the normal range. However, the mean PCV values for T. vivax-infected WASH and N’Dama were lower in infected compared to uninfected animals. The difference was significant (p< 0.05) in N’Dama but not in WASH. The RBC indices were higher in infected N’Dama compared to infected WASH with a significant difference in total RBC (p < 0.05). Conclusion: We conclude from our findings that despite the presence of infection by T. vivax, N’Dama and WASH cattle maintained their haematological parameters within acceptable normal ranges, and this underscores the need for routine diagnosis and treatment so that such trypanotolerant cattle do not serve as potential reservoirs of trypanosome parasites.",
"keywords": [
"Haematology",
"cattle",
"trypanotolerance",
"trypanosomosis",
"N’Dama",
"West African Short Horn"
],
"content": "Introduction\n\nAnimal trypanosomosis, caused by trypanosomes mainly transmitted by tsetse flies, results in annual economic losses in Africa in the range of US$ 1.0 - 1.2 billion in cattle production alone, and more than US$ 4.75 billion in terms of agricultural Gross Domestic Product (Enyaru et al., 2010). Among species of trypanosomes that cause nagana, Trypanosoma vivax is the predominant species in Ghana (Adam et al., 2012; Mahama et al., 2004; Turkson, 1993).\n\nThe usual consequence of trypanosome infection is anaemia, which is often accompanied by poor growth, weight loss, low milk yield, infertility, abortion and paralysis (Dagnachew et al., 2015; OIE, 2013; Steverding, 2008; Trail et al., 1984). Death may result within a few weeks to several months after infection. Past and current control methods are limited, and it is unlikely that a vaccine will become available in the foreseeable future (Vale, 2009).\n\nTrypanotolerant breeds, although equally susceptible to initial infection by trypanosomes, possess the ability to survive, reproduce and remain productive in areas of high tsetse challenge without the need for the use of chemicals to control the vector or drugs to control the parasite (Dayo et al., 2009; Maganga et al., 2017; Rege et al., 1994; Yaro et al., 2016), where other breeds rapidly succumb to the disease (Murray & Dexter, 1988). The trypanotolerant trait is generally attributed to the taurine breeds of cattle in West and Central Africa, namely, the N'Dama and the West African Short Horn (WASH) (Roelants, 1986). Similar observations have been made for the Orma Boran X Maasai Zebu (Orma Zebu) crossbred cattle in East Africa (Maichomo et al., 2005; Mwangi et al., 1998a; Mwangi et al., 1998b). Studies have shown that the basis of this trait was associated with the capacity of these animals to develop less severe anaemia in the face of infection (Murray et al., 1982; Murray & Dexter, 1988).\n\nWe previously reported natural T. 
vivax challenge in N’Dama and WASH cattle herds in Ghana using a sensitive PCR approach (Ganyo, 2014). The current study examines the effect of natural T. vivax challenge on haematological parameters in these trypanotolerant cattle herds.\n\n\nMethods\n\nFifty-five animals each were sampled from an N’Dama herd at Cape Coast in southern Ghana and a WASH herd at Chegbani in northern Ghana from May to July 2011 in a cross-sectional study. The herds were chosen purposively since these were herds with the breeds of interest. From each animal, about 4 ml of blood was collected from the jugular vein using a standard operating procedure that required no sedation and transferred into vacutainer tubes containing EDTA as anticoagulant. The vacutainer tubes were then placed in a coolbox containing ice packs for transportation to the laboratory, where they were refrigerated the same day for subsequent analysis.\n\nDNA was extracted from 200 μl of blood of each animal according to the protocol of Bruford et al. (1998) following red blood cell (RBC) lysis (Biéler et al., 2012). The procedure for DNA amplification and diagnosis of T. vivax infection has been described elsewhere (Ganyo, 2014). Briefly, amplifications were carried out targeting the 170-base pairs (bp) satellite DNA monomer of T. vivax. The PCR was carried out in a total volume of 20 µl containing 10 pmoles of each primer, i.e. TVW_A (5’-GTGCTCCATGTGCCACGTTG-3’) and TVW_B (5’-CATATGGTCTGGGAGCGGGT-3’) (Masiga et al., 1996), 4.0 μl 5X HF Buffer (Finnzymes), 10 mM dNTPs, 1 unit Taq polymerase (Finnzymes) and 1 µl of DNA template. PCR cycling was performed in a 96-well thermocycler (PTC-100 Programmable Thermal Controller, MJ Research, Gaithersburg) as follows: initial denaturation at 98°C for 30 sec, followed by 35 cycles of denaturation at 98°C for 10 sec; annealing at 68°C for 30 sec, primer elongation at 72°C for 15 sec, and a final extension at 72°C for 7 min. 
PCR products were mixed with loading dye and samples were loaded alongside a molecular weight DNA marker as well as known positives and negatives into 1.5% agarose gel, stained with 50 mg/µl ethidium bromide. Electrophoresis was set at 75 volts for 1 hr 20 min, followed by visualization of the DNA under UV-illumination.\n\nPacked cell volume (PCV) was determined by the microhaematocrit centrifugation technique while haemoglobin (Hb) concentration was measured spectrophotometrically by the cyanmethaemoglobin method (Jain, 1986). Total RBC and white blood cell (WBC) counts were done manually using a haemocytometer, according to the procedure outlined in Merck Veterinary Manual (Merck Veterinary Manual, 1986). Differential WBC counts were obtained from air dried thin blood smears stained with Giemsa stain according to the battlement method (Merck Veterinary Manual, 1986). RBC indices of mean corpuscular volume (MCV), mean corpuscular haemoglobin (MCH) and mean corpuscular haemoglobin concentration (MCHC) were calculated using standard formulae.\n\nData analysis was performed using the R statistical software version 2.3.7.1 (R Development Core Team, 2013). The one-way analysis of variance (ANOVA) test was used to compare the means for haematological parameters in infected and uninfected cattle. Tests of significance were done at α = 0.05.\n\n\nResults\n\nSeven of the N’Dama samples (n=55) and four animals from the WASH samples (n=55) were positive for T. vivax infection. The mean haematological values for trypanosome-positive and negative cattle are shown in Table 1. For the N’Dama cattle, significant differences were observed in PCV (p < 0.05), total RBC count, MCV and MCH (p < 0.01) values between infected and uninfected cattle, with PCV, MCV and MCH values being significantly higher in uninfected compared to infected cattle. The other parameters were similar for both groups (Table 1). 
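The “standard formulae” for the RBC indices mentioned in the Methods can be sketched as follows (a minimal Python illustration; the input values below are hypothetical, chosen only to show the units, and are not the study’s data):

```python
def rbc_indices(pcv_percent, hb_g_dl, rbc_millions_per_ul):
    """Compute the standard red-cell indices from PCV (%), Hb (g/dL)
    and total RBC count (10^6 cells/uL)."""
    mcv = pcv_percent * 10 / rbc_millions_per_ul   # mean corpuscular volume, fL
    mch = hb_g_dl * 10 / rbc_millions_per_ul       # mean corpuscular haemoglobin, pg
    mchc = hb_g_dl * 100 / pcv_percent             # MCH concentration, g/dL
    return mcv, mch, mchc

# Hypothetical bovine values: PCV 35%, Hb 11 g/dL, RBC 7 x 10^6/uL
mcv, mch, mchc = rbc_indices(35.0, 11.0, 7.0)
print(round(mcv), round(mch, 1), round(mchc, 1))  # 50 15.7 31.4
```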
For the WASH cattle, the PCV, Hb and RBC values for uninfected cattle were higher than those for infected cattle (Table 1).\n\nn represents number of samples in each category\n\n*Indicates level of significance at 5% level (p < 0.05)\n\n**Indicates level of significance at 1% level (p < 0.01)\n\nPCV, packed cell volume; Hb, haemoglobin; RBC, red blood cells; MCV, mean corpuscular volume; MCH, mean corpuscular haemoglobin; MCHC, mean corpuscular haemoglobin concentration; WBC, white blood cells.\n\nWhen the mean haematological values for T. vivax-positive N’Dama and WASH cattle were compared (Table 2), significant differences were observed in the total RBC count (p = 0.01), MCV (p = 0.04), MCH (p = 0.02) and eosinophil values (p = 0.04). The RBC and eosinophil values were significantly (p < 0.05) higher in N’Dama compared to WASH, while the MCV, MCH and WBC values were significantly (p < 0.05) higher in WASH compared to N’Dama cattle.\n\nn represents number of samples in each category\n\n*Indicates level of significance at 5% level (p < 0.05)\n\nWASH, West African Short Horn; PCV, packed cell volume; Hb, haemoglobin; RBC, red blood cells; MCV, mean corpuscular volume; MCH, mean corpuscular haemoglobin; MCHC, mean corpuscular haemoglobin concentration; WBC, white blood cells.\n\n\nDiscussion\n\nThe mean PCVs for infected N’Dama and WASH cattle in this study were lower than those for uninfected cattle, supporting other findings (Waiswa & Katunguka-Rwakishaya, 2004), and consistent with the pathological effect of anaemia in trypanosome infections. A survey conducted in cattle in Ethiopia (Bekele & Nasir, 2011) revealed that the mean PCV of trypanosome-infected animals was significantly lower (20.8 ± 3.2%) compared to non-infected animals (24.9 ± 3.8%). A later study in Ethiopia (Dagnachew et al., 2015) in cattle experimentally infected with T. 
vivax isolates also showed that the mean PCV, Hb and total RBC count were lower (p < 0.001) in all infected groups than in non-infected control animals. In Nigeria, domestic ruminants that were naturally infected with trypanosomes had significantly lower (p < 0.05) PCV and RBC counts compared to uninfected animals (Ohaeri & Eluwa, 2011). Lower herd average PCVs for trypanosome-positive cattle compared to trypanosome-negative cattle have also been reported from Zambia (Marcotty et al., 2008), Cameroon (Achukwi & Musongong, 2009) and Gabon (Cossic et al., 2017).\n\nThe mean PCV, Hb and RBC values observed in both infected and uninfected N’Dama and WASH cattle studied were within the established normal reference values (Jain, 1993). The presence of T. vivax parasites is associated with a reduction in the mean RBC values below the normal range, an indication of anaemia (Murray & Dexter, 1988; Silva et al., 1999). However, infected cattle in the present study had mean RBC values within the normal range, which could be attributed to their trypanotolerant trait. In a typical trypanotolerance phenomenon, pathogenic Trypanosoma species infection does not usually result in anaemia (Murray et al., 1982; Murray & Dexter, 1988). Our findings compare favourably with the report that the mean RBC, Hb and PCV values in natural T. vivax-infected trypanotolerant Muturu cattle in Nigeria (Mbanasor et al., 2003) were well within accepted normal values for cattle. An earlier study (Adam et al., 2012) involving trypanotolerant WASH cattle in Ghana also reported a normal PCV value despite the presence of trypanosome parasites.\n\nThe importance of keeping trypanotolerant cattle as a trypanosomosis control measure in Ghana has long been recognised (Mahama et al., 2003; Turkson, 1993). However, farmers are increasingly showing a preference for trypanosusceptible Zebu cattle and its Sanga crossbreed, which have a larger body size, higher milk yield and heavier live weight (World Bank, 1992). 
For example, the last Ghana Livestock Census published in 1997 indicated that the WASH constituted 70% of the cattle population in Ghana (Mahama et al., 2003), while a later study gave a reduced WASH population of 47.5% (Ahunu & Boa-Amponsem, 2001).\n\nThe larger-bodied trypanosusceptible cattle breeds command higher prices compared to the smaller-bodied trypanotolerant cattle, but unlike trypanotolerant cattle, trypanosusceptible cattle cannot survive in areas of high tsetse densities without veterinary intervention or other tsetse control strategies (Mahama et al., 2003; Turkson, 1993).\n\nAlthough trypanotolerant breeds are equally susceptible to trypanosome infection, they possess the ability to survive, reproduce and remain productive in areas of high tsetse challenge without the need for the use of chemicals to control the vector or drugs to control the parasite (Dayo et al., 2009; Maganga et al., 2017; Rege et al., 1994; Yaro et al., 2016).\n\nAs was clearly demonstrated in this study, the infected trypanotolerant WASH and N’Dama maintained their mean RBC values in the normal range, which is an indication of good health. Further, in the present study, informal information revealed that none of the livestock keepers who keep such trypanotolerant breeds use trypanocidal drugs regularly or strategically to control the infection. Since trypanotolerant cattle could serve as potential reservoirs of trypanosome parasites, there is the need for owners of WASH and N’Dama herds in Ghana to incorporate routine diagnosis and treatment of trypanosome infections into their overall management strategy.\n\nThe mean WBC counts for the N’Dama and WASH in the present study were within the normal values reported by Jain (Jain, 1993) but were lower than those reported for T. vivax-infected and uninfected Muturu cattle in Nigeria (Mbanasor et al., 2003). 
The total RBC count for infected N’Dama was significantly (p < 0.05) higher than that for infected WASH, suggesting that the N’Dama may be better at controlling anaemia compared to WASH.\n\n\nConclusion\n\nIn conclusion, the study found that in spite of the presence of natural T. vivax infection, the haematological parameters of N’Dama and WASH cattle herds were within acceptable normal ranges. Since such healthy cattle could serve as a potential source of infection for trypanosusceptible cattle and other domestic animals, the study underscores the need to incorporate routine diagnosis and treatment for trypanosome parasites in the management of trypanotolerant cattle herds.\n\nCollection of blood was done as per standard operating procedures to ensure animal welfare (https://www.dpi.nsw.gov.au/animals-and-livestock/animal-welfare/general/general-welfare-of-livestock/sop/pigs/health/blood-collection). These are standard operating procedures used in veterinary medicine internationally.\n\nOwners of animals gave their consent before the animals were bled. Prior to jugular venipuncture, the body of the animal was manually restrained by assistants to avoid injury to the animal. Further, the head of the animal was turned by another assistant at a 30-degree angle to the side by holding the animal under its jaw; this allowed easy access to the vein and ensured quick, easy and safe collection of the sample with minimal distress to the animal. To avoid repeated puncturing, time was taken to locate the vein accurately and it was distended by gentle pressure with the fingers before the needle was inserted. After the vein was located, the area was properly cleaned with alcohol to keep bacteria out of the needle insertion site. To ensure that sampling did not result in hypovolemic shock, physiological stress, anaemia and possibly death, only a minimal amount of 4 ml of blood was drawn from each animal. 
To prevent needle-stick injury, a new needle was used for each venipuncture. As soon as blood was removed from the animal, the insertion site was swabbed with alcohol to remove any bacteria that might have entered the area during the drawing of blood. Pressure was applied for 30–60 seconds immediately following withdrawal of the needle; this pressure promotes clotting and prevents further bleeding.\n\nAt the time this work was conducted (2011), there was no requirement by the University of Cape Coast for ethical clearance for work with animals. Therefore, we followed internationally accepted procedures such as those outlined in “Guidelines for the Welfare of Livestock from which Blood is Harvested for Commercial and Research Purposes” published by the New Zealand National Animal Ethics Advisory Committee in 2009 (https://www.mpi.govt.nz/dmsdocument/1475-guidelines-for-the-welfare-of-livestock-from-which-blood-is-harvested-for-commercial-and-research-purposes).\n\n\nData availability\n\nDataset 1: Haematological parameters of T. vivax infected and uninfected WASH and N'Dama cattle at Chegbani and Cape Coast, respectively. 10.5256/f1000research.14032.d197325 (Ganyo et al., 2018)\n\n\nAuthor information\n\nEYG holds a PhD in Parasitology. JNB is an Associate Professor in the Department of Biomedical and Forensic Sciences, and the Dean of the School of Biological Sciences. JV is a scientist and head of the Molecular Biology and Bioinformatics Unit, International Centre of Insect Physiology and Ecology Nairobi, Kenya. DKM is the head of Animal Health, International Centre of Insect Physiology and Ecology, Nairobi, Kenya. PKT is a Professor of Veterinary Epidemiology and Dean, School of Veterinary Medicine, University of Ghana, Legon, Accra, Ghana.",
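The between-breed comparisons above rest on very small groups (only 4 WASH and 7 N’Dama animals were T. vivax-positive). With samples this small, an exact permutation test over every relabelling of the pooled values is computationally trivial and avoids the distributional assumptions of parametric tests. A minimal sketch using only the Python standard library; the PCV-like values below are illustrative, not the study data:

```python
from itertools import combinations

def exact_permutation_test(group_a, group_b):
    """Two-sided exact permutation test on the difference of group means.

    Enumerates every way of splitting the pooled values into groups of
    the original sizes and reports the fraction of splits whose absolute
    mean difference is at least as large as the observed one.
    """
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed - 1e-12:  # tolerance for float comparison
            count += 1
        total += 1
    return count / total

# Hypothetical values, for illustration only (not study data):
pcv_infected = [24.0, 26.5, 25.0, 27.0]                        # n = 4
pcv_uninfected = [28.0, 29.5, 27.5, 30.0, 28.5, 31.0, 29.0]    # n = 7
p = exact_permutation_test(pcv_infected, pcv_uninfected)
```

Because C(11, 4) = 330, the full permutation distribution is enumerated exactly rather than sampled.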
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported in part by an International Centre of Insect Physiology and Ecology (icipe) six-month Dissertation Research Internship Programme (DRIP) fellowship funded by the Swedish International Development Cooperation Agency (Sida); and institutional financial support from UK Aid from the UK Government; the Swiss Agency for Development and Cooperation (SDC); and the Kenyan Government. The views expressed herein do not necessarily reflect the official opinion of the donors.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Jerry Oddoye and Abdulai Munkaila of the Veterinary Services Directorate, Ghana, for technical assistance. The help received from managers of the cattle herds at sampling sites is appreciated.\n\n\nReferences\n\nAchukwi MD, Musongong GA: Trypanosomosis in the Doayo/Namchi (Bos taurus and zebu White Fulani (Bos indicus) cattle in Faro Division, North Cameroon. J Appl Biosci. 2009; 15: 807–814. Reference Source\n\nAdam Y, Marcotty T, Cecchi G, et al.: Bovine trypanosomosis in the Upper West Region of Ghana: Entomological, parasitological and serological cross-sectional surveys. Res Vet Sci. 2012; 92(3): 462–468. PubMed Abstract | Publisher Full Text\n\nAhunu BK, Boa-Amponsem K: Characterization and conservation of the Ghana Shorthorn Cattle. A report submitted to the Animal Production Directorate of the Ministry of Food and Agriculture. Accra, Ghana. 2001.\n\nBekele M, Nasir M: Prevalence and host related risk factors of bovine trypanosomosis in Hawagelan District, West Wellega Zone, western Ethiopia. Afr J Agric Res. 2011; 6(22): 5055–5060. Reference Source\n\nBiéler S, Matovu E, Mitashi P, et al.: Improved detection of Trypanosoma brucei by lysis of red blood cells, concentration and LED fluorescence microscopy. 
Acta Trop. 2012; 121(2): 135–140. PubMed Abstract | Publisher Full Text\n\nBruford MW, Hanotte O, Brookfield JF, et al.: Multilocus and single locus DNA fingerprinting. In: A. R. Hoelzel Editor, Molecular Genetic Analysis of Populations: A Practical Approach, Oxford, IRL Press. 1998; 287–336.\n\nCossic BGA, Adjahoutonon B, Gloaguen P, et al.: Trypanosomiasis challenge estimation using the diminazene aceturate (Berenil) index in Zebu in Gabon. Trop Anim Health Prod. 2017; 49(3): 619–624. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDagnachew S, Bezie M, Terefe G, et al.: Comparative clinico-haematological analysis in young Zebu cattle experimentally infected with Trypanosoma vivax isolates from tsetse infested and non-tsetse infested areas of Northwest Ethiopia. Acta Vet Scand. 2015; 57: 24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDayo GK, Thevenon S, Berthier D, et al.: Detection of selection signatures within candidate regions underlying trypanotolerance in outbred cattle populations. Mol Ecol. 2009; 18(8): 1801–1813. PubMed Abstract | Publisher Full Text\n\nEnyaru JC, Ouma JO, Malele II, et al.: Landmarks in the evolution of technologies for identifying trypanosomes in tsetse flies. Trends Parasitol. 2010; 26(8): 388–394. PubMed Abstract | Publisher Full Text\n\nGanyo EY: Trypanosome infection and genetic variation in major histocompatibility complex DRB3 gene in cattle in Ghana. PhD Thesis, University of Cape Coast. 2014.\n\nGanyo EY, Boampong JN, Masiga DK, et al.: Dataset 1 in: Haematology of N’Dama and West African Short Horn cattle herds under natural Trypanosoma vivax challenge in Ghana. F1000Research. 2018. Data Source\n\nJain NC: Essentials of Veterinary Hematology. Philadelphia, Lea and Febiger. 1993; 417. Reference Source\n\nJain NC: Schalm’s veterinary hematology. Philadelphia, Lea and Febiger. 1986; 1221. 
Reference Source\n\nMaganga GD, Mavoungou JF, N’dilimabaka N, et al.: Molecular identification of trypanosome species in trypanotolerant cattle from the south of Gabon. Parasite. 2017; 24: 4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahama CI, Desquesnes M, Dia ML, et al.: A cross-sectional epidemiological survey of bovine trypanosomosis and its vectors in the Savelugu and West Mamprusi districts of northern Ghana. Vet Parasitol. 2004; 122(1): 1–13. PubMed Abstract | Publisher Full Text\n\nMahama CI, Mohammed HA, Abavana MA, et al.: Tsetse and trypanosomosis in Ghana in the twentieth century: a review. Rev Elev Med Vet Pays Trop. 2003; 56(1–2): 27–32. Reference Source\n\nMaichomo MW, Ndungú JM, Ngare PM, et al.: The performance of Orma Boran and Maasai Zebu crossbreeds in a trypanosomosis endemic area of Nguruman, south western Kenya. Onderstepoort J Vet Res. 2005; 72(1): 87–93. PubMed Abstract | Publisher Full Text\n\nMarcotty T, Simukoko H, Berkvens D, et al.: Evaluating the use of packed cell volume as an indicator of trypanosomal infections in cattle in eastern Zambia. Prev Vet Med. 2008; 87(3–4): 288–300. PubMed Abstract | Publisher Full Text\n\nMasiga DK, McNamara JJ, Laveissière C, et al.: A high prevalence of mixed trypanosome infections in tsetse flies in Sinfra, Côte d’Ivoire, detected by DNA amplification. Parasitology. 1996; 112(Pt 1): 75–80. PubMed Abstract | Publisher Full Text\n\nMbanasor UU, Anene BM, Chime AB, et al.: Haematology of normal and trypanosome infected Muturu cattle in southeastern Nigeria. Nig J Anim Prod. 2003; 30(2): 236–241. Publisher Full Text\n\nMerck Veterinary Manual: The Merck Veterinary Manual. A handbook of Diagnosis, Therapy, and Disease Prevention and Control for the Veterinarian. 1986; 882.\n\nMurray M, Morrison WI, Whitelaw DD: Host susceptibility to African trypanosomiasis: trypanotolerance. In JR. Baker and R. Muller Editors. Adv Parasitol. London, Academic Press, 1982; 21: 1–68. 
PubMed Abstract | Publisher Full Text\n\nMurray M, Dexter TM: Anaemia in bovine African trypanosomiasis. A review. Acta Trop. 1988; 45(4): 389–432. PubMed Abstract\n\nMwangi EK, Stevenson P, Gettinby G, et al.: Susceptibility to trypanosomosis of three Bos indicus cattle breeds in areas of differing tsetse fly challenge. Vet Parasitol. 1998a; 79(1): 1–17. PubMed Abstract | Publisher Full Text\n\nMwangi EK, Stevenson P, Ndung'u JM, et al.: Studies on host resistance to tick infestations among trypanotolerant Bos indicus cattle breeds in East Africa. Ann N Y Acad Sci. 1998b; 849(1): 195–208. PubMed Abstract | Publisher Full Text\n\nOhaeri CC, Eluwa MC: Abnormal biochemical and haematological indices in trypanosomiasis as a threat to herd production. Vet Parasitol. 2011; 177(3–4): 199–202. PubMed Abstract | Publisher Full Text\n\nOIE: Terrestrial Manual, Chapter 2.4.18 Trypanosomiasis (Tsetse-transmitted). 2013. Reference Source\n\nR Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. 2013. Reference Source\n\nRege JE, Aboagye GS, Tawah CL: Shorthorn cattle of West and Central Africa.I. Origin, distribution, classification and population statistics. World Anim Rev. 1994; 78: 1–14. Reference Source\n\nRoelants GE: Natural resistance to African trypanosomiasis. Parasite Immunol. 1986; 8(1): 1–10. PubMed Abstract | Publisher Full Text\n\nSilva RA, Ramirez L, Souza SS, et al.: Hematology of natural bovine trypanosomosis in the Brazilian Pantanal and Bolivian wetlands. Vet Parasitol. 1999; 85(1): 87–93. PubMed Abstract | Publisher Full Text\n\nSteverding D: The history of African trypanosomiasis. Parasit Vectors. 2008; 1(1): 3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrail JC, Murray M, Wissocq Y: The trypanotolerant livestock network in West and Central Africa. ILCA Bulletin. 1984; 18: 16–19. 
Reference Source\n\nTurkson PK: Seroepidemiological survey of cattle trypanosomiasis in coastal savanna zone of Ghana. Acta Trop. 1993; 54(1): 73–76. PubMed Abstract | Publisher Full Text\n\nVale GA: Prospects for controlling trypanosomosis. Onderstepoort J Vet Res. 2009; 76(1): 4–45. PubMed Abstract\n\nWaiswa C, Katunguka-Rwakishaya E: Bovine trypanosomiasis in south–western Uganda: packed-cell volumes and prevalences of infection in the cattle. Ann Trop Med Parasitol. 2004; 98(1): 21–27. PubMed Abstract | Publisher Full Text\n\nWorld Bank: Staff appraisal report. Republic of Ghana. National Livestock Services Project. Report No. 11058-GH. Washington DC:The World Bank. 1992; 132. Reference Source\n\nYaro M, Munyard KA, Stear MJ, et al.: Combatting African Animal Trypanosomiasis (AAT) in livestock: the potential role of trypanotolerance. Vet Parasitol. 2016; 225: 43–52. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "33102",
"date": "25 Apr 2018",
"name": "Yahaya Adam",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAnimal trypanosomosis, as reported in the paper, is indeed a major constraint to livestock production systems in Ghana. The theme for the paper is therefore very appropriate. The study design and methods as presented appear to satisfy the standard requirements for a scientific study. The authors suggested the use of these trypano-tolerant animals as control measure for the problem of animal trypanosomosis but I hold a dissenting view to that. The N'dama and WASH cattle, even though are trypano-tolerant as indicated clearly in the study, can not be a solution to the problem for the following reasons:\n1) The T. vivax challenge does not mean a 100% free from the impact as demonstrated in the study (PCV of infected slightly lower than that of uninfected for both breeds of the trypano-tolerant animals used in the study). 2) The N'dama and the WASH breeds are not very productive compared to the other breeds of cattle raised in Ghana, probably due to the T. vivax challenge. 3) The two breads of cattle in the study have the potential status as reservoir of trypanosomes to other breeds of cattle.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. 
A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "3618",
"date": "27 Apr 2018",
"name": "Paa Kobina",
"role": "Author Response",
"response": "We accept the comments of the Referee. We agree that the two breeds are not as productive as other breeds but having them deal better with trypanosomosis is considered by some livestock keepers as an advantage to keep them and therefore may be preferred.."
}
]
},
{
"id": "33583",
"date": "18 May 2018",
"name": "David M Groth",
"expertise": [
"Reviewer Expertise Immunogenetics",
"Genetics",
"Molecular Biology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAnimal trypanosomosis is an important disease in both animals and humans and understanding the host’s response to such infection is important scientific endeavor. The manuscript describes some basic hematological parameters in these two breeds of animals, which may have some biological significance and lead to some understanding of parameters affecting disease resistance. However, the number of infected animals is quite low in both groups, with only 4/55 WASH and 7/55 N’Dama infected.\n\nThe data supports the conclusions presented.\n\nSuggestions:\n\nThe use of real numbers for each WB cell subgroup parameter rather than reporting only a %. For instance the Eosinophils in Table 2 could be represented as a number rather than a % and this would give a greater feel for the level of absolute differences. If for instance the Eosinophils were represented as a number then the differences between the breeds would be much clearer. Both absolute numbers and % could be used.\n\n5mg/ul is a considerable concentration of ethidium bromide (needs to be checked). Green and Sambrook recommends use of 0.5ug/ml suggest checking the value.\n\nConcentration of the template is not given and should be.\n\nSuggest checking concentrations of the reagents used in the PCR. For instance, 10mM dNTPs is quite a considerable quantity of dNTPs or is it x ul of a 10mM dNTP solution. 
Typical final concentrations in PCR are between 200 and 250 µM.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
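The reviewer's reagent queries come down to the standard dilution relation C1·V1 = C2·V2: a 10 mM dNTP stock must be diluted to roughly 200–250 µM final in the reaction. A minimal sketch of the arithmetic (the 25 µl reaction volume is an illustrative assumption):

```python
def final_concentration(stock_conc, stock_vol, total_vol):
    """Dilution relation C1*V1 = C2*V2 solved for C2.
    Any consistent units work (e.g. mM with µl)."""
    return stock_conc * stock_vol / total_vol

def stock_volume_needed(stock_conc, final_conc, total_vol):
    """Same relation solved for V1 (volume of stock to add)."""
    return final_conc * total_vol / stock_conc

# Reaching a 0.2 mM (200 µM) final dNTP concentration in a 25 µl PCR
# from a 10 mM stock requires 0.5 µl of stock:
v1 = stock_volume_needed(10.0, 0.2, 25.0)
```

Reporting the final concentration alongside the stock value, as the reviewer suggests, removes this ambiguity for readers reproducing the assay.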
"responses": []
},
{
"id": "33786",
"date": "01 Jun 2018",
"name": "Sophie Thévenon",
"expertise": [
"Reviewer Expertise parasitology",
"genetics",
"host*parasite interactions",
"trypanosomoses"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript aims at assessing the effect of infections by Trypanosoma vivax on the hematological parameters of N’dama and WASH (West African Short-horn) cattle, raised in two natural environments in Ghana. The purpose is to highlight the trypanotolerant character of these breeds.\nMajor comments:\nThe article suffers from several major problems and is not suited for indexing.\nThe bibliography is quite incomplete: major papers written by Trail et al1-2 and Mattioli et al3-4 are not cited. These authors worked on N’Dama cattle raised in Congo and Gambia respectively and on the relationships between productivity, anemia and infections. Mattioli et al 1998 (Acta Tropica) showed that N’Dama cattle suffered from high tse-tse challenge. Trail et al 1994 showed that N’dama infected by trypanosomes had lower PCV values and lower weight gain than non-infected N’Dama. In addition, an experimental infection published by Berthier et al (2015)5, presented anemia evolution in 5 cattle breeds of West Africa under T. congolense infection and show of N’Dama and WASH were less anemiated than Zebu Fulani and Borgou.\nThe experimental design presented in the article does not bring robust elements on anemia control during T. vivax infection and on the comparison between N’Dama and WASH. There is not any susceptible breed that could be compared to N’Dama and WASH. It is thus not possible to know if the T. vivax strains are highly pathogenic or not. 
Since N’Dama and WASH are not raised in the same area under the same agro-ecological context, it is not possible to compare these two breeds.\nBecause only 4 and 7 animals were positive by T. vivax PCR, an ANOVA cannot be used; only a non-parametric test can be used.\nOther comments:\nThe article of Bouyer et al (2015)6 must be cited in the introduction concerning control methods. I do not agree with the sentence “past and current control methods are limited”: the use of trypanocidal drugs may be useful and efficient when their usage is adapted to the context (environment and breeding system).\nIn Tables I and II and in the text, the terms “positive in T. vivax PCR” and “negative in T. vivax PCR” must be used instead of infected or uninfected. Indeed, PCR has a sensitivity around 75-80% and thus some animals considered as negative in PCR may be infected.\nIn the discussion, the authors propose to incorporate routine diagnosis and treatment. But the problem is that there is no routine diagnosis, since parasitological methods have a very low sensitivity, and PCR and serology require a well-equipped laboratory with well-trained technicians. Farmers need support from farmers’ organizations and from the public veterinary service. The notion of “reservoirs” due to trypanotolerant cattle has never been clearly investigated.\nFinally, raising trypanotolerant breeds is important in some agro-ecological contexts, where tsetse challenge is high and in low-input systems. In some areas, only trypanotolerant breeds can survive.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? No",
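The reviewer's point that a ~75–80% sensitive PCR will miss some infections can be made quantitative with the Rogan-Gladen estimator, which corrects an apparent prevalence for test sensitivity and specificity. A minimal sketch; the herd counts come from the reviews above, while the 77.5% sensitivity and perfect specificity are illustrative assumptions:

```python
def rogan_gladen(apparent_prev, sensitivity, specificity=1.0):
    """Rogan-Gladen corrected ("true") prevalence estimate:
    true = (apparent + Sp - 1) / (Se + Sp - 1).
    Requires Se + Sp > 1; the estimate is clamped to [0, 1]."""
    est = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)

# 7 of 55 N'Dama were PCR-positive; with an assumed 77.5% sensitive,
# fully specific PCR, the corrected prevalence exceeds the apparent one.
apparent = 7 / 55
corrected = rogan_gladen(apparent, sensitivity=0.775)
```

With these assumed test characteristics, the corrected prevalence is simply the apparent prevalence divided by the sensitivity, which is why "negative in T. vivax PCR" is the safer wording.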
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-314
|
https://f1000research.com/articles/7-1233/v1
|
10 Aug 18
|
{
"type": "Research Article",
"title": "Formation of 53BP1 foci and ATM activation under oxidative stress is facilitated by RNA:DNA hybrids and loss of ATM-53BP1 expression promotes photoreceptor cell survival in mice",
"authors": [
"Vaibhav Bhatia",
"Lourdes Valdés-Sánchez",
"Daniel Rodriguez-Martinez",
"Shom Shankar Bhattacharya",
"Lourdes Valdés-Sánchez",
"Daniel Rodriguez-Martinez"
],
"abstract": "Background: Photoreceptors, light-sensing neurons in retina, are central to vision. Photoreceptor cell death (PCD) is observed in most inherited and acquired retinal dystrophies. But the underlying molecular mechanism of PCD is unclear. Photoreceptors are sturdy neurons that survive high oxidative and phototoxic stress, which are known threats to genome stability. Unexpectedly, DNA damage response in mice photoreceptors is compromised; mainly due to loss of crucial DNA repair proteins, ATM and 53BP1. We tried to understand the molecular function of ATM and 53BP1 in response to oxidative stress and how suppression of DNA repair response in mice retina affect photoreceptor cell survival. Methods: We use the state of art cell biology methods and structure-function analysis of mice retina. RNA:DNA hybrids (S9.6 antibody and Hybrid-binding domain of RNaseH1) and DNA repair foci (gH2AX and 53BP1) are quantified by confocal microscopy, in retinal sections and cultured cell lines. Oxidative stress, DNA double strand break, RNaseH1 expression and small-molecule kinase-inhibitors were used to understand the role of ATM and RNA:DNA hybrids in DNA repair. Lastly, retinal structure and function of ATM deficient mice, in Retinal degeneration 1 (Pde6brd1) background, is studied using Immunohistochemistry and Electroretinography. Results: Our work has three novel findings: firstly, both human and mice photoreceptor cells specifically accumulate RNA:DNA hybrids, a structure formed by re-hybridization of nascent RNA with template DNA during transcription. Secondly, RNA:DNA-hybrids promote ataxia-telangiectasia mutated (ATM) activation during oxidative stress and 53BP1-foci formation during downstream DNA repair process. Thirdly, loss of ATM -in murine photoreceptors- protract DNA repair but also promote their survival. 
Conclusions: We propose that, due to high oxidative stress and accumulation of RNA:DNA hybrids in photoreceptors, expression of ATM is tightly regulated to prevent PCD. Inefficient regulation of ATM expression could be central to PCD, and inhibition of ATM activation could suppress PCD in retinal dystrophy patients.",
"keywords": [
"RNA:DNA-hybrids",
"ATM",
"53BP1",
"Genome instability",
"Oxidative stress",
"DNA repair",
"Photoreceptor cell death",
"Retinal degeneration",
"Retinitis pigmentosa"
],
"content": "Introduction\n\nPhotoreceptors are light-sensory neurons and one of the six major cell types in the retina, which are organized into stratified layers (Figure 1a). Mutations in more than 250 genes, both retina-specific and ubiquitously expressed, are associated with inherited retinal dystrophies (IRDs) (https://sph.uth.edu/retnet). These genes have approximately 20 known cellular functions1. Mutations in genes coding for proteins involved in retina-specific functions (i.e., phototransduction, visual cycle, retinal development, etc.) could well explain photoreceptor dysfunction or degeneration. But, the reasons that mutations in ubiquitously expressed genes can result in PCD remains unresolved.\n\n(a) Labelled diagram shows stratified organization of retinal layers and cell types. (b) Immunofluorescence with DNA:RNA hybrid specific S9.6 antibody of mice retinal cells. Tissue was proteolysed and disintegrated for staining (see Methods). Photoreceptors (outer nuclear layer cells) can be identified by typical inverted chromatin, as seen by DAPI stain.\n\nA prime example is mutations in ubiquitously expressed members of the U4/U6-U5 tri-snRNP particle (PRPF31, PRPF3, PRPF4, PRPF6, PRPF8) and splice-complex proteins (SNRNP200 and PAP1), which are the second-most frequent cause of autosomal dominant forms of retinitis pigmentosa (adRP) after mutations in rhodopsin1–3. An exception is DHX38, a spliceosome complex associated RNA-helicase, which has an autosomal recessive pattern of inheritance4.\n\nHeterozygous mutations in human PRPF genes does not affect any cell type but specifically causes PCD. What makes photoreceptor cells more susceptible to mutations in PRPF is presently unknown. Wheway et al. reported, using mouse cells, that PRPF6, 8, and 31 are important for ciliogenesis5. 
Photoreceptor cells are specialized sensory cilia, and defects in ciliogenesis can primarily affect photoreceptor biogenesis and survival, as seen for mutations in genes such as CEP290 and BBS1. However, unlike those in CEP290, mutations in PRPF do not cause PCD in mice. Mouse knockout models of PRPF31, PRPF3 and PRPF8, as well as knock-in models containing analogous mutations, do not show any photoreceptor degeneration6,7. It was also hypothesized that PRPF mutations could have a more pronounced effect on the splicing of photoreceptor-specific genes, thus specifically deteriorating the health of photoreceptors. As observed, however, haploinsufficiency of PRPFs causes genome-wide splicing defects and does not explain the photoreceptor-specific phenotype1,8,9. Evidently, some other factor or combination of factors causes the higher vulnerability of photoreceptors to the loss of splicing proteins.\n\nInefficient splicing or defects in mRNP biogenesis can lead to genomic instability by an RNA:DNA-hybrid-dependent mechanism10,11. RNA:DNA hybrids are formed by re-hybridization of nascent RNA with negatively supercoiled DNA behind the moving RNA polymerase, plausibly accompanied by a single-stranded non-template DNA, to form a three-stranded structure known as an R-loop11,12. RNA:DNA hybrids are shown to cause DNA breaks in a replication-dependent as well as a replication-independent manner, mainly by impeding transcription and replication progression11,13,14.\n\nAlthough RNA:DNA hybrids have not been observed in post-mitotic neurons, their role in neurodegeneration is alleged15. Many proteins involved in RNA:DNA-hybrid dissolution are associated with neurodegeneration. RNA:DNA-hybrid helicases such as senataxin (SETX) and aquarius (AQR) are associated with ataxia with oculomotor apraxia type 2 (AOA2) and type 1 (AOA1)16–18. 
Nucleotide excision repair proteins, ataxia telangiectasia mutated (ATM) and Fanconi anaemia pathway proteins, which are associated with neurodegeneration, neurodevelopmental defects or microcephaly, have recently been implicated in RNA:DNA-hybrid resolution13,14,19,20. This led us to speculate that RNA:DNA hybrids could be formed in retinal neurons and could play a role in retinal degeneration.\n\n\nResults\n\nWe checked if post-mitotic retinal neurons could accumulate RNA:DNA hybrids. Mouse retina was stained with the S9.6 (RNA:DNA-hybrid-specific) antibody. Higher levels of RNA:DNA hybrids are observed in adult photoreceptor nuclei than in the other retinal neurons (Figure 1b). Nuclei of murine photoreceptors have an inverted chromatin organization, with central heterochromatin and peripheral euchromatin (Supplementary Figure 1a). Interestingly, the RNA:DNA hybrids are observed on the peripheral euchromatin region of photoreceptor nuclei, in proximity to the nuclear membrane (Figure 1b, middle panel). We also checked the expression of RNaseH1 (a ribonuclease) and Senataxin (a helicase), which are enzymes involved in the dissolution of RNA:DNA hybrids. Senataxin is expressed mainly in the outer nuclear layer (ONL) of the retina and is localized to the euchromatin area of photoreceptor nuclei (Supplementary Figure 1b). In contrast, RNaseH1 is mainly expressed in the ganglion cell layer (GC) and inner nuclear layer (INL) of the retina, but not in the ONL (Figure 1a and Supplementary Figure 1c). Extra-nuclear staining of RNaseH1 (likely mitochondrial) in the photoreceptor inner segment is observed.\n\nAs photoreceptors can accumulate RNA:DNA hybrids, we wondered whether loss of spliceosomal proteins could lead to RNA:DNA-hybrid-dependent genomic instability in photoreceptor cells. Of the eight PRPFs, PRPF31 gene aberrations are a major cause of adRP (i.e. 
RP11) and lead to genome-wide splicing defects8,9.\n\nWe used siRNA-based PRPF31 downregulation in RPE-1 cells and quantified foci formation of early DNA damage and repair markers, i.e. γH2AX (H2AX phosphorylated at Ser139) and 53BP1. A significant increase in γH2AX and 53BP1 foci is observed in cells depleted of PRPF31 (Figure 2a, b and Supplementary Figure 2a). We next analyzed primary cells from the stromal vascular fraction (SVF) of PRPF31-deficient mouse models (Prpf31+/- and Prpf31+/A216P)6,7. The Prpf31-A216P variant reduces the stability and nuclear localization of the U4/U6-U5 tri-snRNP complex21. Primary SVF cells obtained from heterozygous Prpf31+/A216P mice show clear accumulation of γH2AX (Figure 2c, d). Notably, expression of active RNaseH1 in these cells significantly reduced both the γH2AX and 53BP1 signals. This indicates the role of RNA:DNA hybrids in the genomic instability observed in the absence of functional PRPF31. Cells obtained from Prpf31+/- mice also exhibit accumulation of γH2AX (Supplementary Figure 2b).\n\n(a and b) γH2AX and 53BP1 foci analysis in PRPF31 siRNA-transfected RPE-1 cells. (c and d) γH2AX and 53BP1 foci analysis in stromal vascular fraction-derived primary cells from Prpf31+/A216P mice (Prpf31-ki). (e) γH2AX and 53BP1 foci analysis in retina from Prpf31+/A216P mice on postnatal day 20. All column bars represent the mean. For (a–d), “n”, shown on each column, signifies the number of cells analyzed from two independent experiments. For (e), n=16 for each column and signifies the number of retinal sections analyzed, acquired from n=4 eyes. Error bars represent the standard error of the mean (SEM). *P≤0.05; **P<0.01, ***P<0.001 using Mann-Whitney test (a,b), Kruskal-Wallis test followed by Dunn’s post hoc test (c, d); and two-tailed unpaired Student’s t-test (e).\n\nWe next assessed whether PRPF31-deficient photoreceptors also show increased genomic instability. 
But unlike in RPE-1 and primary SVF cells, no elevation in genomic instability was observed in the retinal neurons of adult Prpf31+/A216P mice (data not shown). In the retina of postnatal day 20 mice, an increase in γH2AX and 53BP1 foci was observed (Figure 2e). Notably, 53BP1 is not expressed in the ONL (composed of photoreceptor nuclei), except in the apical (outermost) layer of photoreceptor nuclei (Figure 2e, arrow).\n\nThe fact that adult mouse photoreceptors can accumulate RNA:DNA hybrids but do not show any accumulation of genomic instability markers is puzzling. To understand why this is the case, we looked at DNA repair markers in irradiated photoreceptor cells. As reported previously22, we also observed that mouse photoreceptor cells have inefficient DNA repair. Irradiation induces γH2AX formation in all retinal cell types, but the signal localizes only to the euchromatin region (Figure 3a, b). As aforementioned, 53BP1 is not observed in the ONL (containing the nuclei of photoreceptors) and the outer half of the INL (composed mainly of horizontal cell nuclei) (Figure 1a, Figure 3a). Only at 24 h post irradiation did the γH2AX signal disappear from the nuclei of photoreceptors (Figure 3c). We also checked for irradiation-induced cell death by the terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) assay. The retinal neurons show resistance to irradiation-induced cell death and, unlike the ganglion cell layer, the ONL showed no TUNEL-positive cells until 24 h post-irradiation (Supplementary Figure 3).\n\n(a) Immunofluorescence performed using anti-γH2AX and 53BP1 antibodies in mouse retina 1 h after exposure to 5 Gy of ionizing radiation. γH2AX appears in all cell types in response to DNA breaks. 53BP1 is observed only in the ganglion cell layer (GC) and the inner strata of the inner nuclear layer (INL) (see Figure 1a). Zoomed-in images show foci formation in cells expressing 53BP1. (b) H2AX phosphorylation in all retinal cell types is euchromatin-specific. 
(c) Kinetics of DNA repair in mouse retina. Post-irradiation, mice were sacrificed at the indicated times and retinal sections were analysed for γH2AX and 53BP1. The left panel shows all nuclear layers of the retina; the right panel shows zoomed-in images to emphasize foci formation. Two mice were used for each condition, which were processed and stained together. Random images of each eye were taken using a confocal microscope.

ATM is a major PI3-kinase-related kinase in post-mitotic neurons. We observed that in photoreceptors, most irradiation-induced H2AX phosphorylation is independent of ATM22. The absence of functional ATM or the presence of an ATM inhibitor does not prevent H2AX phosphorylation in the ONL of the retina (Figure 4a, b). Consistently, western blot analysis of micro-dissected neural retinas showed that 53BP1 and ATM levels were depleted around postnatal day 20 (Figure 4c). However, analysis of cDNA from the neural retina suggests that an alternatively spliced form of ATM, lacking the N-terminal PRD and C-terminal FATC domains, could be present (Figure 4d).

(a) Retinal explants pretreated with ATM inhibitor (ATMi) or PI3K inhibitor (DDRi) were irradiated and sections analyzed for γH2AX. (b) ATM knockout adult mouse retina analyzed for irradiation-dependent γH2AX accumulation. (c) Western blot analysis of ATM and 53BP1 in micro-dissected neural retina. LC, Coomassie-equivalent staining of gels used as loading control (representative of n=2 independent repeats). (d) Cartoon depiction of the domains of the ATM protein. (e) Neural retina of mice was microdissected and cDNA was prepared. PCR was performed using the indicated domain-specific primers on ATM mRNA.
Unlike the FATC- and TAN-domain-coding mRNA, the FAT-domain-containing mRNA could be observed even in the retina of 5-month-old adult mice, indicating the presence of an alternatively spliced form of ATM in the retina.

Inefficient DNA repair and an absence of ATM are not expected in photoreceptors, especially considering that markers of oxidative stress are most pronounced in the cerebellum and retina1,23,24. Oxidative DNA damage is the most common cause of DNA damage in post-mitotic neurons and can result in single-strand breaks, a major cause of neurodegeneration. Antioxidant treatments have proved to be promising neuroprotective strategies for many retinal dystrophies25,26.

ATM is a sensor of oxidative stress27,28. DNA topology or subtle chromatin changes can also activate ATM, even in the absence of a DNA break29. Activated ATM can signal DNA repair, cell cycle arrest and also cell death, mainly by a p53-dependent pathway30,31. Notably, ATM has been shown to promote RNA:DNA hybrid formation at transcribed sites by removal of the spliceosomal complex32. We wondered whether the absence of ATM is linked to the presence of RNA:DNA hybrids and high oxidative stress in mouse photoreceptors.

To assess whether RNA:DNA hybrids can directly affect ATM function during oxidative stress, we looked at H2O2-induced ATM activation in the presence and absence of ectopically expressed RNaseH1. Notably, removal of RNA:DNA hybrids by RNaseH1 overexpression completely abolishes ATM phosphorylation at Ser1981 after H2O2 treatment, as observed by western blotting and immunofluorescence (Figure 5a, b). The suppression was stronger than that obtained using an ATM-specific inhibitor. We also observed that the H2O2-induced activation of ATM was more prominent in proximity to the nuclear membrane (Figure 5b).

(a) Western blot of cell extracts treated with H2O2 in the presence of ATM inhibitor or ectopic RNaseH1 expression.
The loading control is Coomassie-equivalent staining of gels before transfer (detailed in the Methods). (b) Immunofluorescence of cells with an antibody against ATM phosphorylated on Ser1981. Images are representative of n=4 (for a) and n=3 (for b) independent experiments.

As ATM activation depends on chromatin association and release, we quantified nuclear ATM in detergent-permeabilized cells expressing the inactive hybrid-binding (HB) domain (which stabilizes RNA:DNA hybrids) or active RNaseH1 (which destabilizes the hybrids)13. The results show high variability, with possibly multiple factors controlling the association of ATM with chromatin; however, it appears that stabilization of hybrids could increase the nuclear retention of ATM (Supplementary Figure 4a). The amount of ATM cross-linked to insoluble pelleted chromatin increased after H2O2 treatment and could be partially released by over-expressing RNaseH1 (Supplementary Figure 4b). This suggests that RNA:DNA hybrids regulate the interaction of ATM with chromatin.

We next looked at 53BP1, which is also absent from photoreceptors. 53BP1 is a pro-non-homologous end-joining protein and accumulates at damaged DNA sites in an ATM-dependent manner33,34. The formation of 53BP1 foci is crucial for DNA repair, checkpoint activation and cell death33,35,36. Notably, stabilization of RNA:DNA hybrids via expression of the HB domain significantly increases 53BP1 foci formation. Conversely, RNaseH1 overexpression significantly suppresses 53BP1 foci formation (Figure 6a).

(a) Immunofluorescence using anti-Flag and anti-53BP1 antibodies in cells expressing the Flag-tagged hybrid-binding (HB) domain or Flag-tagged active RNaseH1. 53BP1 foci were quantified in cells expressing the HB domain or RNaseH1. Column bars represent the mean of n cells (indicated on each column), from three independent experiments. Error bars represent SEM.
*P≤0.05; **P<0.01; ***P<0.001 using the Kruskal-Wallis test followed by Dunn's post hoc test. (b) Higher-magnification representative images show HB-domain and 53BP1 co-localization, specifically in the euchromatin area.

Notably, high-resolution confocal microscopy showed that 53BP1 foci consistently co-localize with RNA:DNA hybrids (Figure 6b), indicating a clear affinity of 53BP1 for chromatin regions containing RNA:DNA hybrids. However, unlike ATM, H2O2 treatment does not increase the number of 53BP1 foci; rather, stabilization of RNA:DNA hybrids by the HB domain increases 53BP1 foci formation, even in the presence of H2O2 (Figure 6a, bar graph).

Clearly, the DNA repair activities of ATM and 53BP1 depend on RNA:DNA hybrids. We next assessed whether ATM or 53BP1 is crucial for RNA:DNA-hybrid removal. Using the HB domain, we probed and quantified RNA:DNA hybrids in cells depleted of ATM. Notably, the number of RNA:DNA hybrids decreases in cells depleted of ATM (Figure 7a, b). As shown before, ATM promotes RNA:DNA hybrid accumulation32. This action requires ATM phosphorylation, as ATM inhibitor treatment suppresses RNA:DNA hybrid formation (Figure 7c). As expected, ATM-depleted cells are also inefficient in forming 53BP1 foci (Figure 7a). The results support the idea that ATM-dependent 53BP1 association with chromatin and RNA:DNA hybrid formation are interdependent events.

(a) Immunofluorescence using anti-Flag antibody to quantify the nuclear hybrid-binding (HB) domain foci signal in pre-permeabilized RPE cells, and quantification of the HB domain signal in cells treated with siATM. (b) Western blot showing ATM depletion. (c) Immunofluorescence and HB-domain foci quantification in cells treated with ATM inhibitor. (d) Quantitation of γH2AX foci in cells expressing the HB domain or active RNaseH1, in the presence or absence of H2O2-dependent oxidative stress. siC, non-targeted control siLuciferase RNA.
Column bars represent the mean of n cells (indicated on each column) from three independent experiments. Error bars represent SEM. *P≤0.05; **P<0.01; ***P<0.001 using the Mann-Whitney test (a, c) and the Kruskal-Wallis test followed by Dunn's post hoc test (d).

RNA:DNA hybrids are primarily considered a source of genomic instability. Notably, we observe higher levels of oxidative stress-dependent genomic instability when RNA:DNA hybrids are removed. In Figure 7d, as expected, stabilization of RNA:DNA hybrids by the HB domain increases γH2AX foci levels. However, when treated with H2O2, HB-domain-expressing cells do not show a further increase in γH2AX foci. In contrast, destabilization of RNA:DNA hybrids by RNaseH1 overexpression results in a manifold accumulation of γH2AX foci after H2O2 treatment. Similar results are obtained by western blot analysis, wherein cells over-expressing RNaseH1 show an increase in γH2AX after H2O2 treatment (Figure 6a). Very likely, RNA:DNA hybrid formation is a crucial step during ATM-mediated DNA repair, and the higher level of γH2AX in the absence of ATM activation is a result of prolonged DNA repair and damage accumulation.

ATM is crucial for repair as well as for the induction of apoptosis. In the presence of unrepaired DNA breaks or blocked DNA-protein complex intermediates, ATM can initiate cell death signaling31. ATM-deficient cells are defective in irradiation-induced apoptosis30. As mentioned above, the ONL of the mouse retina shows resistance to irradiation-induced cell death, as observed by TUNEL staining (Supplementary Figure 3). ATM-induced cell death occurs via a p53-dependent pathway31.
It is known that ectopic expression of p53 in photoreceptor cells promotes photoreceptor cell death37, although the mechanism is unclear38.

We observed that stabilization of hybrids by the HB domain in RPE-1 cells leads to S-phase accumulation, possibly by RNA:DNA-hybrid-dependent inhibition of replication fork progression (Supplementary Figure 5)10,11. Notably, unlike RPE-1 cells, HB-domain expression in HT1180 cells shows an accumulation of sub-G1 apoptotic cells, but no defects in S or G2 phase progression (Supplementary Figure 5). The different outcomes show that stable RNA:DNA hybrids not only create replication stress but can also induce cell death, possibly via the apoptotic pathway.

Photoreceptors are post-mitotic, terminally differentiated neurons, so RNA:DNA-hybrid-induced replication stress cannot occur. However, under the high oxidative stress present in photoreceptors, RNA:DNA-hybrid-dependent constitutive ATM activation could promote cell death30,31. We thus wondered whether the absence of ATM in retinal neurons promotes photoreceptor cell survival under high oxidative stress and metabolic demands.

As ATM is only expressed before postnatal day 20 in the mouse retina (Figure 4c, d), we used the rd1 mouse model, which carries a mutation in the Phosphodiesterase 6B (Pde6b) gene and loses 80 percent of its photoreceptors before postnatal day 15 due to severe oxidative stress39. We removed ATM in the Pde6b-/- background and analyzed the retina at postnatal day 20. In the retina of ATM knockout mice (Pde6b-/- Atm-/-), photoreceptor cell death decelerates and the thickness of the ONL is significantly higher (Figure 8a, b).

ATM was knocked out in the rd1 background and retinas of postnatal day 20 mice were analyzed by different methods. (a and b) Immunofluorescence using the anti-Recoverin antibody performed on retinal sections of Pde6b-/- ATM+/+ and Pde6b-/- ATM-/- mice.
Line scan analysis was performed and outer nuclear layer thickness was quantified using Recoverin staining (detailed in Methods). Column bars represent the mean of n line scans, performed on n=32 sections from n=2 Pde6b-/- ATM+/+ and n=3 Pde6b-/- ATM-/- animals. (c) Electroretinograms of mouse retina were performed in dark-adapted animals and the b-wave was quantified. Column bars represent the mean of n=18 electroretinogram (ERG) readings from n=9 animals for each of the Pde6b-/- ATM+/+ and Pde6b-/- ATM-/- genotypes, and n=12 ERG readings from n=6 mice with the Pde6b-/- ATM+/- genotype. Error bars represent SEM. *P≤0.05; **P<0.01; ***P<0.001 using the Mann-Whitney test (b) and one-way ANOVA followed by the Tukey test (c).

We next resorted to electroretinogram (ERG)-based functional analysis of the retina, which measures the light-induced trans-retinal flux of ions. ERG of Pde6b-/- Atm-/- mice shows a slight but significantly improved b-wave in dark-adapted animals compared with Pde6b-/- Atm+/+ animals, indicating protection of retinal function (Figure 8c).

We next assessed the situation in human photoreceptors. As observed by S9.6 antibody staining, RNA:DNA hybrids are also present in human photoreceptor nuclei, though unlike mouse photoreceptors they localize to the central euchromatin region (Supplementary Figure 6a). Notably, unlike in murine photoreceptors, expression of ATM and 53BP1 is observed in the adult human retina by immunofluorescence (Supplementary Figure 6b). All data are available on OSF40.


Discussion

RNA:DNA-hybrid-dependent ATM hyper-activation during oxidative stress could promote photoreceptor cell death. This mechanism could be more pronounced in human patients than in murine RP models, as the former express higher levels of ATM and 53BP1. This could explain why some mouse models, e.g. Prpf31 mutants, mutations of which cause retinitis pigmentosa-11 in humans, do not exhibit photoreceptor cell death1,7.
Detailed studies are required, but we anticipate that the expression of ATM and 53BP1 is fine-tuned in photoreceptors with respect to RNA:DNA-hybrid and oxidative stress levels (Supplementary Figure 7). Loss of this equilibrium, through increased RNA:DNA-hybrid accumulation or higher expression of ATM, could cause photoreceptor cell death (PCD) in progressive retinitis pigmentosa and age-related macular degeneration1,4. Loss of ATM and 53BP1 in mouse photoreceptors is a compromise in which decreased sensing of DNA damage is coupled to slower activation of cell death signaling in post-mitotic neurons (Supplementary Figure 7). Possibly because humans, unlike mice, are diurnal and live much longer, they would require better DNA repair and tighter uncoupling of gene expression and DNA repair. It was recently reported that osteoclast cells also show better survival in the absence of ATM41. Though there must be additional factors that influence photoreceptor cell survival, we think our work will help to improve understanding of the mechanism underlying photoreceptor cell death, and we propose ATM and 53BP1 as targets for neuroprotection strategies in the human retina.

The molecular mechanisms that sense and resolve RNA:DNA hybrids are unclear and under intense investigation5,6. It is known that RNA:DNA hybrids are intimately linked to genomic instability and replication stress. Our work shows that formation of RNA:DNA hybrids is central to the ATM-53BP1-dependent repair pathway. Thus, RNA:DNA hybrids could affect the efficiency of DNA repair, cell death signaling and checkpoint activation (Supplementary Figure 7). We expect that future work to understand RNA:DNA-hybrid-associated molecular pathways will further elucidate their role in neurodegeneration and ageing.


Methods

Primary stromal vascular fraction (SVF) cells were obtained by a standard protocol.
Confluent cultures of primary SVF cells, RPE-1, HT1180 and HEK293T cells were all maintained in DMEM (#32430027, Gibco™) supplemented with 10% FBS, at 37°C and 5% CO2. To induce oxidative stress, cells were incubated in DMEM containing freshly diluted 500 µM H2O2 for 1 h at 37°C and 5% CO2, before being processed for western blot or immunofluorescence.

Sequences of siRNA used were: siLuciferase (negative control), 5′-CGUACGCGGAAUACUUCGA-3′; siATM, 5′-GACUUUGGCUGUCAACUUUCG-3′; and siPRPF31, 5′-AGGAUGAGAUCGAGCGCAA-3′. Cells were transfected using Oligofectamine™ Transfection Reagent (#12252011, Invitrogen™) following the manufacturer's protocol in Opti-MEM™ (#31985070, Gibco) cell culture medium and incubated for 48 h before being processed for immunofluorescence.

The 3×Flag-tagged HB domain of RNaseH1 and 3×Flag-tagged full-length RNaseH1 lacking the mitochondrial targeting sequence13 were cloned into a pAAV-IRES-hrGFP (Agilent; #240075) plasmid, and viral particles were produced using the AAV Helper-Free System (Agilent #240071) as per the manufacturer's instructions. Cells were transduced in a Biosafety Level 2 facility and incubated for at least 24 h to allow expression of the tagged protein. For siRNA-based experiments, GFP-positive transduced cells expressing RNaseH1 or the HB domain were FACS-sorted using a BD FACSAria™ cell sorter (#P-07900125; BD Biosciences) equipped with FACSDiva software (V5.0.3; BD Biosciences). Cells were cultured and transfected as above. siRNA-treated cells were analyzed 48 h after transfection. To study the effect of ATM inhibition on RNA:DNA hybrid accumulation, cells were incubated with complete DMEM containing 10 µM ATM inhibitor (KU55933; Tocris) for 24 h.
The plates were protected from light and the inhibitor-containing medium was replaced every 8 h.

The primary antibodies and dilutions used were: anti-γH2AX (clone JBW301, 05-636, Merck Millipore; source, mouse; 1:100), anti-53BP1 (NB100-304, Novus Biologicals; source, rabbit; 1:100), anti-ATM (Sigma, MAT3-4G10/8; source, mouse; 1:200), anti-ATM pSer1981 (Cell Signaling, #4526; source, mouse; 1:100), anti-RNaseH1 (Proteintech, #15606-1-AP; source, rabbit; 1:100), anti-Recoverin (Millipore, AB5585; source, rabbit; 1:500), anti-Flag (Sigma, M2 clone; source, mouse; 1:1,000), and anti-RNA:DNA hybrid (S9.6 clone, a kind gift from the Andrés Aguilera lab and later bought from Kerafast, #ENH001; source, mouse; 1:50). (Note that the S9.6 antibody shows variation in staining efficiency, and care should be taken by including a secondary-antibody-only control.) When used in combination with the anti-Flag antibody (to detect Flag-tagged proteins), anti-γH2AX (Cell Signaling, #2577; source, rabbit; 1:100) and anti-ATM (Santa Cruz, #sc7129-Q19; source, goat; 1:50) were used. AlexaFluor® (Molecular Probes) secondary antibodies conjugated to green (488), red (555) and far-red (633) fluorophores were used (donkey anti-mouse, #A21202; donkey anti-rabbit, #A21206; goat anti-mouse, #A21422; goat anti-rabbit, #A21070; goat anti-mouse, #A21052), all at a dilution of 1:400. Acquisition settings were adjusted using primary and secondary antibody controls to rule out any cross-channel signal detection and autofluorescence. Cells grown on glass coverslips were fixed for 10 min in 2% methanol-free formaldehyde (Sigma) in PBS. Cells were washed and permeabilized with 70% ethanol (20 min at -20°C) and stored in 70% ethanol at 4°C. For staining, cells were blocked for 30 min in blocking solution (PBS with 0.1% Triton X-100 and 5% BSA), incubated overnight with primary antibody in blocking solution, washed, and incubated for 1 h with secondary antibody in blocking solution.
After three washes in excess PBS with 0.1% Triton X-100, coverslips were blot-dried and mounted in 4,6-diamidino-2-phenylindole (DAPI)-containing Vectashield mounting medium (Vector Labs, H-1201). Images were acquired on a Leica TCS SP5 confocal microscope equipped with LAS AF software version 2.1 and four laser sources: 405 Diode, 543 HeNe, 633 HeNe and Argon.

For immunofluorescence of the mouse retina, animals were euthanized by cervical dislocation and the eyes excised and fixed in 4% paraformaldehyde in PBS for 30 min at room temperature. After repeated PBS washes, fixed eyes were incubated at 4°C for 8 h each in 10% sucrose-PBS, 20% sucrose-PBS and 30% sucrose-PBS, followed by a 50-50 solution of 30% sucrose and optimal cutting temperature (OCT) compound (Tissue-Tek, #4583). The eyes were frozen in 100% OCT on dry ice and stored at -20°C. For cryotome sectioning, performed at -20°C, 18-µm-thick serial sections were mounted in five parallel series and stained as described for cells above. For RNA:DNA-hybrid detection using the S9.6 antibody, sections were pretreated with Accutase (Sigma; #A6964) for 30 min at room temperature for tissue dissociation, and staining was performed as reported by Bhatia et al.13. For the TUNEL assay, the In Situ Cell Death Detection Kit (Roche, Mannheim, Germany) was used as per the manufacturer's instructions.

Metamorph (Molecular Devices, version 7.1) image analysis software was used to quantitate foci, signal intensity and signal area with the built-in Granularity and Line Scan functions. In brief, for foci analysis, a maximum projection of the z-stacks is created (with LAS AF Leica software) and saved as .tiff images. The images are opened in Metamorph and converted to 8-bit format with the built-in Multiply function. Background subtraction is done based on an unstained area in the image. A mask is created, based on the DAPI image, to assign the nuclear area.
Using the Granularity function, the signal (with a minimum granule size in pixels) is quantified per nucleus and recorded in a linked Excel sheet. To measure ONL thickness in the mouse retina using Metamorph, recoverin-stained retinal section images were opened and three lines per image were drawn perpendicular to the ONL (i.e. the recoverin-stained ONL) using the Line Scan tool. The intensity measurements are automatically documented in an Excel file. The length of the line, in pixels with positive recoverin signal, was used to measure the thickness of the ONL. The length in pixels was converted to µm by multiplying by a conversion factor obtained by measuring the scale bar on the image. These measurements were then used for quantitative analysis in GraphPad Prism 5.

Mice were anesthetized by subcutaneous injection of ketamine/xylazine (80/12 mg/kg body weight) and exposed to 5 Gy of gamma irradiation (BioBeam-8000; Gamma Service Medical GmbH). After irradiation, mice were returned to their cages for 1 h (unless specified, as in Figure 3c). Thereafter mice were euthanized and the eyes were processed as described above. To pre-treat the retina with small-molecule kinase inhibitors, the cornea was dissected out as described by Donovan et al.42. Explants were incubated for 1 h in DMEM + 10% FBS medium with 10 µM ATMi (KU55933; ATM inhibitor from Tocris) or 5 µM DDRi (PI3 kinase inhibitor; a gift from Prof. Oscar Fernandez-Capetillo, CNIO, Madrid). Explants were then exposed to 5 Gy of gamma irradiation and incubated in DMEM (with ATMi or DDRi) for 1 h at 37°C and 5% CO2. Thereafter, the retinal explants were processed as described above for mouse eyes.

Cells were recovered by Accutase (Sigma, #A6964) treatment for 5 min at 37°C. For the neural retina, two animals for each age group were euthanized by cervical dislocation (postnatal days 10, 20, 60, 270) or by decapitation (postnatal day 5).
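The Metamorph-based foci counting and line-scan thickness measurement described above can be approximated in a few lines of code. The sketch below is a hypothetical reimplementation in Python (NumPy/SciPy, not Metamorph); the threshold, minimum granule size and function names are illustrative assumptions, not values or tools from this study.

```python
import numpy as np
from scipy import ndimage

def count_foci(channel, nuclear_mask, background, min_px=4):
    """Count bright granules (foci) inside the nuclear area.

    Mirrors the described workflow: background subtraction, restriction
    to a DAPI-derived nuclear mask, then labelling of connected bright
    regions at or above a minimum granule size (in pixels).
    """
    signal = np.clip(channel.astype(float) - background, 0, None)
    signal[~nuclear_mask] = 0
    labels, n = ndimage.label(signal > 0)          # connected components
    sizes = ndimage.sum(np.ones_like(labels), labels, range(1, n + 1))
    return int(np.sum(sizes >= min_px))

def onl_thickness_um(line_profile, um_per_px):
    """Thickness of the recoverin-positive layer along one line scan:
    number of pixels with positive signal, converted to micrometres
    via a scale-bar-derived conversion factor."""
    return int(np.sum(np.asarray(line_profile) > 0)) * um_per_px
```

For example, a 5-pixel granule is counted while a 2-pixel speck is rejected with `min_px=4`, and a line profile with 3 positive pixels at 0.5 µm/px yields a thickness of 1.5 µm.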
Eyes were removed and the neural retina was separated by microdissection as described by Donovan et al.42. Cells/tissues were lysed using RIPA lysis buffer (Sigma, #R0278) for 30 min on ice and centrifuged at 4°C (Eppendorf, #5415R) to remove the debris. The protein concentration in the supernatant was measured on a NanoDrop ND-100 spectrophotometer (NanoDrop Technologies). Normalized sample volumes were appropriately diluted with 4× SDS-PAGE sample buffer to obtain equal protein concentrations at a resultant 1× sample buffer concentration. Samples were heated for 10 min at 90°C and loaded on Mini-PROTEAN® TGX Stain-Free™ (4–20% gradient) precast gels (BIO-RAD, #456-8096). After electrophoresis, gels were scanned under ultraviolet light to obtain Coomassie-equivalent staining, which was used as the loading control. Overnight transfer was performed in Tris-Glycine buffer with 5% methanol onto an Amersham Hybond™-P 0.45 blotting membrane. After the transfer, the gel was again UV-exposed to check the efficiency of transfer. For blocking, the PVDF membrane was incubated in SuperBlock™ (PBS) Blocking Buffer (Thermo Scientific™, #37515) with 0.1% Tween-20 for 30 minutes at room temperature. Primary antibodies used were anti-ATM (Sigma, MAT3-4G10/8, 1:1000), anti-ATM pSer1981 (Cell Signaling, #4526, 1:1000), anti-γH2AX (clone JBW301, 05-636, Merck Millipore, 1:500), anti-53BP1 (NB100-304, Novus Biologicals, 1:1000), anti-RNaseH1 (Proteintech, #15606-1-AP, 1:1000), and anti-β-actin (Sigma, #A3854, 1:1000). Antibodies were diluted in PBS with 0.1% Tween-20 and 2% BSA (Calbiochem, #12659) and incubated overnight at 4°C. Anti-mouse and anti-rabbit HRP-conjugated secondary antibodies (Sigma, #A4416 and #A0545, respectively) were used at 1:20,000 dilution for 1 h at room temperature. Probed PVDF membranes were treated for 5 minutes with WesternBright™ ECL reagent (Advansta, #K12045-D20) and imaged using Amersham Hyperfilm™ (GE Healthcare, #28906844) and a Hyperprocessor (Amersham Biosciences, Model SRX-101A).
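The sample normalization step above is simple C1V1 = C2V2 arithmetic. As a worked illustration (a hypothetical helper with made-up numbers, not part of the original protocol), the following sketch computes per-lane lysate, water and 4× buffer volumes so every lane carries the same protein load at a final 1× buffer concentration:

```python
def load_volumes(concs_ug_per_ul, load_ug, final_ul):
    """Per-lane volumes of lysate, water and 4x sample buffer.

    concs_ug_per_ul: measured protein concentration of each lysate.
    load_ug: protein mass to load per lane.
    final_ul: final volume per lane (4x buffer diluted to 1x).
    """
    buffer_ul = final_ul / 4.0  # 4x stock -> 1x in the final volume
    lanes = []
    for c in concs_ug_per_ul:
        lysate_ul = load_ug / c  # V = m / C
        if lysate_ul + buffer_ul > final_ul:
            raise ValueError("sample too dilute for requested load")
        lanes.append({"lysate_ul": round(lysate_ul, 2),
                      "buffer_4x_ul": buffer_ul,
                      "water_ul": round(final_ul - buffer_ul - lysate_ul, 2)})
    return lanes
```

For two lysates at 2.0 and 4.0 µg/µl, a 20 µg load in 20 µl requires 10 µl and 5 µl of lysate respectively, each with 5 µl of 4× buffer and water to volume.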
To re-probe with another antibody, membranes were stripped using Restore™ western blot stripping buffer (Thermo Scientific, #21059) for 10 min at room temperature.

The neural retina of two mice for each time point (postnatal days 10, 20, 60 and 270) was microdissected; one eye of each was used for western blot and the other was used to prepare total RNA. RNA was quantified using a NanoDrop ND-100 spectrophotometer (NanoDrop Technologies), normalized and reverse transcribed to cDNA using the QuantiTect Reverse Transcription Kit (Qiagen, #205311). Primers were designed using the Primer3Plus online tool on selected exons of the ATM cDNA sequence (from the ENSEMBL database). Primers target regions of the ATM mRNA corresponding to domains of the protein sequence, i.e. between exons 2–4 for the TAN domain, between exons 62–64 for the FATC domain, and between exons 13–17 for the control FAT domain. The primers (sequences given below) were validated using UCSC in silico PCR, and selected if they produced a single PCR product from the mouse transcriptome sequence and no product from the mouse genomic sequence. PCR was performed using MyTaq™ Red DNA polymerase under standard conditions: annealing at 54°C for 30 s (all primers have a Tm between 58°C and 60°C) and extension at 72°C for 1 min, for 30 cycles, for all primers.

Primer sequences are as follows. ATM_FATC_domain: 5′-TGCTGACCATTGTAGAGGTTCT-3′ (forward) and 5′-CAGTTCAGTGTGTATGCGGC-3′ (reverse); ATM_TAN_domain: 5′-AGTGGATAAATTTAAGCGCCTGA-3′ (forward) and 5′-AGCCACTGTTGCTGAGATACT-3′ (reverse); ATM_FAT_domain: 5′-TCTGAAACCCTTGTCCGGTG-3′ (forward) and 5′-AGGACTCATGGCACCAACAG-3′ (reverse).

All experiments were performed in compliance with Spanish and European Union laws on animal care in experimentation and approved by the Committee of Animal Experimentation, CABIMER, Seville, Spain. Mice were maintained in Specific Pathogen Free (SPF) conditions and health status was monitored through a comprehensive surveillance program.
Cages (4–6 adults per cage), bedding (sawdust), water (sterilized by autoclaving) and food (irradiated Rodent VRF1) were changed weekly (every Tuesday). Room temperature (21°C) and a 12 h–12 h light cycle (6 pm–6 am) were maintained. Equipment and material that needed to enter the SPF zone were decontaminated by hydrogen peroxide vapor. The number of mice used in the study was kept to a minimum, and sample size calculations were not performed prior to the experiments due to a lack of equivalent datasets and information about expected results. Power analysis was performed retrospectively to confirm that the power of the statistical analysis was >0.8 at alpha=0.05.

Retinal sections for DNA response analysis are from C57BL/6J adult wild-type mice (8–10 weeks), both male and female (22±3 g). The previously reported PRPF31 mouse models6 were generated by the S.S. Bhattacharya lab at University College London in collaboration with Charles River (France). Mice were procured and transferred to the SPF animal facility of CABIMER, Seville, Spain. Prpf31A216P/+ mice were on a mixed background of 129S2/Sv (source of the stem cells used for mutation incorporation) and C57BL/6J (background used for crossing the chimeric mice). The Prpf31+/- knockout mouse has a BALB/c, 129S2/Sv and C57BL/6J mixed background, as it was generated by crossing the Prpf31A216P/+ mouse with a BALB/c mouse expressing Cre recombinase (BALB/c-Tg(CMV-Cre)1Cgn/J)6. Retinal sections were obtained from 20-day-old Prpf31A216P/+ mice. SVF cells were isolated from adult PRPF31 mouse models (8–12 weeks). ERG and retinal thickness analyses were done on 20-day-old Pde6b-/- ATM mice. Pde6b-/- ATM mice were produced by crossing ATM+/- knockout mice (originally created in a mixed 129/SvEv and NIH Black Swiss background43) with Pde6b-/- mice (FVB/Ncrl, from Charles River, an early-onset retinal degeneration strain homozygous for the Pde6brd1 allele).
ATM mice were obtained from the Felipe Cortés lab (CABIMER, Seville, Spain) and genotyped as reported before43.

To study the DNA repair response in irradiated mice, two independent repeats were performed. For each repeat, all animals were taken from a single litter of C57BL/6J adult wild-type mice of the same sex (all male or all female).

Weaning was performed for all strains at postnatal day 20. For the PRPF31 mouse models and Pde6b-/- ATM mice, which were studied at postnatal day 20 (15±2 g), pups were always kept with the mother. For ERG analysis, 24 h dark adaptation was performed with the mother. For Prpf31A216P/+ retinal sections, two mice each of wild-type and heterozygous A216P/+ genotypes (i.e. four mice in total, 8 eyes) were analyzed. For SVF cells, two mice for each mouse model and its respective wild-type were used, i.e. 8 eyes. Independent cultures were maintained for up to 10 passages.

For ERG analysis, a total of 24 mice, i.e. 48 eyes (from nine ATM+/+, six ATM+/- and nine ATM-/- mice), were used, all coming from three litters. Males and females were not distinguished. Animals were genotyped after ERG analysis was performed to reduce any bias. For morphological assessment, retinal thickness analysis was performed on two Pde6b-/- ATM+/+ and three Pde6b-/- ATM-/- mice (five mice in total). Metamorph software-based automated Line Scan analysis of recoverin staining was used for retinal thickness analysis, as described above.

The primary result of our study is that the molecular functions of ATM and 53BP1 depend on RNA:DNA hybrids, and that removal of RNA:DNA hybrids completely inhibits ATM activation during oxidative stress. The primary outcome from the mouse models is the partial preservation of retinal structure and function after ATM removal in rd1 mice.
The additional outcomes are: a suppressed DNA repair response and loss of ATM and 53BP1 expression in retinal neurons, a comparison with human photoreceptors, and the presence of RNA:DNA hybrids in proximity to the nuclear membrane of murine photoreceptor cells. All efforts were made to ameliorate the suffering of animals.

Electroretinography was performed using a Color Dome Ganzfeld (Diagnosys LLC, MA, USA) as detailed before by Lourdes et al.44. Briefly, to evaluate scotopic vision, mice were dark-adapted overnight and anaesthetized by subcutaneous injection of ketamine/xylazine (80/12 mg/kg body weight). A drop each of 10% phenylephrine and 1% tropicamide was used to dilate the pupils. To detect the retinal response, the mouse was placed inside the Color Dome Ganzfeld (Diagnosys LLC, MA, USA) and electrodes were touched to the surface of the corneas, pre-treated with a hydrating agent (1% methylcellulose). A single pulse of white flash (6500 K) was used for stimulation, with stimulus strengths of 0.1, 1 and 10 lux. An average of 15 responses was taken, with an inter-stimulus interval of 15 s.

The indicated statistical tests were performed using the GraphPad Prism 5 package. In brief, the Kolmogorov-Smirnov normality test was used to check the distribution of the data (alpha=0.05). A parametric two-tailed Student's t-test was used for data with a normal distribution; otherwise, a non-parametric Mann-Whitney test was applied. For multiple comparisons, one-way ANOVA followed by the Tukey test was used if the data had a normal distribution; otherwise a Kruskal-Wallis test followed by Dunn's post hoc test was used. Statistical significance is marked by one, two or three asterisks, indicating P < 0.05, P < 0.01 or P < 0.001, respectively.


Data availability

All data associated with this study, including all raw microscopy images and uncropped western blots, are available on OSF: http://doi.org/10.17605/OSF.IO/X3CM740.
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
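As a worked illustration of the test-selection logic described in the statistical analysis paragraph of the Methods, the sketch below reproduces the two-group branch (normality check, then parametric or non-parametric test) in Python with SciPy. This is a hedged sketch of the decision rule, not the GraphPad Prism implementation; for multiple groups the analogous branch would be one-way ANOVA with Tukey versus Kruskal-Wallis with Dunn.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Normality check (Kolmogorov-Smirnov on standardized data),
    then Student's t-test if both groups look normal, otherwise the
    Mann-Whitney test. Returns (test name, p-value)."""
    def looks_normal(x):
        x = np.asarray(x, dtype=float)
        z = (x - x.mean()) / x.std(ddof=1)
        return stats.kstest(z, "norm").pvalue > alpha
    if looks_normal(a) and looks_normal(b):
        name, res = "t-test", stats.ttest_ind(a, b)
    else:
        name, res = "Mann-Whitney", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, res.pvalue

def stars(p):
    """Asterisk convention from the Methods (P < 0.05 / 0.01 / 0.001)."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
```

Usage: `name, p = compare_two_groups(group_a, group_b); print(name, stars(p))`.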
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work is supported by Junta de Andalucia, Spain (P09CT54967) and Juan de la Cierva grants (IJCI-2014-22549) from the Ministry of Economy, Industry and Competitiveness, Government of Spain.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Figure 1. Expression of DNA:RNA-hybrid-specific enzymes and chromatin organization in the inner nuclear layer (INL) and outer nuclear layer (ONL) neurons of retina. Immunofluorescence in adult mice retinal sections. (a) Localization of euchromatin histone marker (H3K4me3) in mouse retinal neurons and highlight inverted chromatin organization in photoreceptor nuclei (ONL). (b) Senataxin (RNA:DNA helicase) staining of mice retinal sections. Lower panel show zoomed view of ONL. (c) RNASEH1 (RNA:DNA ribonuclease) staining of mice retinal sections. Inner segment (IS) of photoreceptor cells (which is rich in mitochondria) show RNaseH1 expression. However, the major expression is observed in INL cells. and ganglion cell (GC) layer.\n\nClick here to access the data.\n\nSupplementary Figure 2. Genomic instability in absence of Prpf31. (a) Anti-PRPF31 staining showing depletion of PRPF31 in siRNA-transfected RPE-1 cells. (b) Immunofluorescence was performed using anti-γH2AX and 53BP1 antibodies in vasculo-stromal fraction-derived primary cells from Prpf31+/- and Prpf31+/+ mice. A representative image of n=2 (for a) and n=2 (for b) independent repeats is presented.\n\nClick here to access the data.\n\nSupplementary Figure 3. Resistance to irradiation induced cell death observed by terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) staining in the adult mouse retina. Post-irradiation, mice were sacrificed at indicated times and retinal sections were used for TUNEL-assay. 
Untreated or DNAseI-treated mouse retinal sections were used as negative and positive controls for the TUNEL assay. The images are representative of n=2 independent repeats, using the same animals as in Figure 3c.\n\nClick here to access the data.\n\nSupplementary Figure 4. RNA:DNA hybrids promote the association of ATM with chromatin. (a) Metamorph-based quantification of nuclear ATM staining by immunofluorescence of pre-permeabilized cells, expressing the hybrid-binding (HB) domain or active RNaseH1. (b) Formaldehyde-based crosslinking of chromatin-associated ATM. Cells treated with H2O2 in the presence or absence of RNaseH1 were exposed to formaldehyde and lysed under denaturing conditions, and the lysate was analyzed by SDS-PAGE. Loss of signal due to chromatin crosslinking is observed by western blot analysis. The cytoplasmic protein β-actin was used as a control. A representative image of n=4 independent repeats is shown.\n\nClick here to access the data.\n\nSupplementary Figure 5. Effect of stabilizing RNA:DNA hybrids by hybrid-binding (HB) domain expression on the cell cycle. The HB domain was expressed in RPE-1 (upper panel) and HT1180 cells (lower panel). RPE-1 cells show S-phase accumulation and HT1180 cells show the presence of a sub-G1 population in the presence of the HB domain.\n\nClick here to access the data.\n\nSupplementary Figure 6. Immunofluorescence analysis of human retinal sections. (a) S9.6 staining to detect RNA:DNA hybrids. The outer nuclear layer (ONL) and inner nuclear layer (INL) are labeled. (b) ATM and 53BP1 expression analyzed by immunofluorescence in human photoreceptor nuclei.\n\nClick here to access the data.\n\nSupplementary Figure 7. RNA:DNA hybrids stimulate the DNA damage response. ATM activation and RNA:DNA hybrid formation are interdependent events and important for the DNA repair response during oxidative stress. The model shows that RNA:DNA hybrids (RNA (orange) and template DNA (red)) are sites for ATM binding and activation. 
We speculate that cells could dissolve RNA:DNA hybrids using enzymes like RNaseH (right panel) or suppress ATM-expression (left panel) to fine-tune ATM-dependent signaling in DNA repair and cell death.\n\nClick here to access the data.\n\n\nReferences\n\nWright AF, Chakarova CF, Abd El-Aziz MM, et al.: Photoreceptor degeneration: genetic and mechanistic dissection of a complex trait. Nat Rev Genet. 2010; 11(4): 273–84. PubMed Abstract | Publisher Full Text\n\nVithana EN, Abu-Safieh L, Allen MJ, et al.: A human homolog of yeast pre-mRNA splicing gene, PRP31, underlies autosomal dominant retinitis pigmentosa on chromosome 19q13.4 (RP11). Mol Cell. 2001; 8(2): 375–381. PubMed Abstract | Publisher Full Text\n\nMcKie AB, McHale JC, Keen TJ, et al.: Mutations in the pre-mRNA splicing factor gene PRPC8 in autosomal dominant retinitis pigmentosa (RP13). Hum Mol Genet. 2001; 10(15): 1555–62. PubMed Abstract | Publisher Full Text\n\nRůžičková Š, Staněk D: Mutations in spliceosomal proteins and retina degeneration. RNA Biol. 2017; 14(5): 544–552. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWheway G, Schmidts M, Mans DA, et al.: An siRNA-based functional genomics screen for the identification of regulators of ciliogenesis and ciliopathy genes. Nat Cell Biol. 2015; 17(8): 1074–1087. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBujakowska K, Maubaret C, Chakarova CF, et al.: Study of gene-targeted mouse models of splicing factor gene Prpf31 implicated in human autosomal dominant retinitis pigmentosa (RP). Invest Ophthalmol Vis Sci. 2009; 50(12): 5927–33. PubMed Abstract | Publisher Full Text\n\nGraziotto JJ, Farkas MH, Bujakowska K, et al.: Three gene-targeted mouse models of RNA splicing factor RP show late-onset RPE and retinal degeneration. Invest Ophthalmol Vis Sci. 2011; 52(1): 190–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTanackovic G, Ransijn A, Thibault P, et al.: PRPF mutations are associated with generalized defects in spliceosome formation and pre-mRNA splicing in patients with retinitis pigmentosa. Hum Mol Genet. 2011; 20(11): 2116–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCao H, Wu J, Lam S, et al.: Temporal and tissue specific regulation of RP-associated splicing factor genes PRPF3, PRPF31 and PRPC8--implications in the pathogenesis of RP. PLoS One. 2011; 6(1): e15860. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSantos-Pereira JM, Aguilera A: R loops: new modulators of genome dynamics and function. Nat Rev Genet. 2015; 16(10): 583–597. PubMed Abstract | Publisher Full Text\n\nHamperl S, Cimprich KA: The contribution of co-transcriptional RNA:DNA hybrid structures to DNA damage and genome instability. DNA Repair (Amst). 2014; 19: 84–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDrolet M: Growth inhibition mediated by excess negative supercoiling: the interplay between transcription elongation, R-loop formation and DNA topology. Mol Microbiol. 2006; 59(3): 723–30. PubMed Abstract | Publisher Full Text\n\nBhatia V, Barroso SI, García-Rubio ML, et al.: BRCA2 prevents R-loop accumulation and associates with TREX-2 mRNA export factor PCID2. Nature. 2014; 511(7509): 362–5. PubMed Abstract | Publisher Full Text\n\nSollier J, Stork CT, García-Rubio ML, et al.: Transcription-coupled nucleotide excision repair factors promote R-loop-induced genome instability. Mol Cell. 2014; 56(6): 777–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGroh M, Gromak N: Out of balance: R-loops in human disease. PLoS Genet. 2014; 10(9): e1004630. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRichard P, Manley JL: SETX sumoylation: A link between DNA damage and RNA surveillance disrupted in AOA2. Rare Dis. 2014; 2: e27744. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYüce Ö, West SC: Senataxin, defective in the neurodegenerative disorder ataxia with oculomotor apraxia 2, lies at the interface of transcription and the DNA damage response. Mol Cell Biol. 2013; 33(2): 406–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYeo AJ, Becherel OJ, Luff JE, et al.: R-loops in proliferating cells but not in the brain: implications for AOA2 and other autosomal recessive ataxias. PLoS One. 2014; 9(3): e90219. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarcía-Rubio ML, Pérez-Calero C, Barroso SI, et al.: The Fanconi Anemia Pathway Protects Genome Integrity from R-loops. PLoS Genet. 2015; 11(11): e1005674. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStracker TH, Roig I, Knobel PA, et al.: The ATM signaling network in development and disease. Front Genet. 2013; 4: 37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuranová M, Hnilicová J, Fleischer B, et al.: A mutation linked to retinitis pigmentosa in HPRP31 causes protein instability and impairs its interactions with spliceosomal snRNPs. Hum Mol Genet. 2009; 18(11): 2014–23. PubMed Abstract | Publisher Full Text\n\nFrohns A, Frohns F, Naumann SC, et al.: Inefficient double-strand break repair in murine rod photoreceptors with inverted heterochromatin organization. Curr Biol. 2014; 24(10): 1080–90. PubMed Abstract | Publisher Full Text\n\nPrunty MC, Aung MH, Hanif AM, et al.: In Vivo Imaging of Retinal Oxidative Stress Using a Reactive Oxygen Species-Activated Fluorescent Probe. Invest Ophthalmol Vis Sci. 2015; 56(10): 5862–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDu Y, Veenstra A, Palczewski K, et al.: Photoreceptor cells are major contributors to diabetes-induced oxidative stress and local inflammation in the retina. Proc Natl Acad Sci U S A. 2013; 110(41): 16586–16591. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJarrett SG, Boulton ME: Antioxidant up-regulation and increased nuclear DNA protection play key roles in adaptation to oxidative stress in epithelial cells. Free Radic Biol Med. 2005; 38(10): 1382–91. PubMed Abstract | Publisher Full Text\n\nJarrett SG, Boulton ME: Consequences of oxidative stress in age-related macular degeneration. Mol Aspects Med. 2012; 33(4): 399–417. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuo Z, Kozlov S, Lavin MF, et al.: ATM activation by oxidative stress. Science. 2010; 330(6003): 517–21. PubMed Abstract | Publisher Full Text\n\nPaull TT: Mechanisms of ATM Activation. Annu Rev Biochem. 2015; 84: 711–38. PubMed Abstract | Publisher Full Text\n\nKim YC, Gerlitz G, Furusawa T, et al.: Activation of ATM depends on chromatin interactions occurring before induction of DNA damage. Nat Cell Biol. 2009; 11(1): 92–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuchaud E, Ridet A, Stoppa-Lyonnet D, et al.: Deregulated apoptosis in ataxia telangiectasia: association with clinical stigmata and radiosensitivity. Cancer Res. 1996; 56(6): 1400–1404. PubMed Abstract\n\nRoos WP, Kaina B: DNA damage-induced cell death by apoptosis. Trends Mol Med. 2006; 12(9): 440–50. PubMed Abstract | Publisher Full Text\n\nTresini M, Warmerdam DO, Kolovos P, et al.: The core spliceosome as target and effector of non-canonical ATM signalling. Nature. 2015; 523(7558): 53–58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPanier S, Boulton SJ: Double-strand break repair: 53BP1 comes into focus. Nat Rev Mol Cell Biol. 2014; 15(1): 7–18. PubMed Abstract | Publisher Full Text\n\nBaldock RA, Day M, Wilkinson OJ, et al.: ATM Localization and Heterochromatin Repair Depend on Direct Interaction of the 53BP1-BRCT2 Domain with γH2AX. Cell Rep. 2015; 13(10): 2081–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDiTullio RA Jr, Mochan TA, Venere M, et al.: 53BP1 functions in an ATM-dependent checkpoint pathway that is constitutively activated in human cancer. Nat Cell Biol. 2002; 4(12): 998–1002. PubMed Abstract | Publisher Full Text\n\nBouwman P, Aly A, Escandell JM, et al.: 53BP1 loss rescues BRCA1 deficiency and is associated with triple-negative and BRCA-mutated breast cancers. Nat Struct Mol Biol. 2010; 17(6): 688–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVuong L, Brobst DE, Ivanovic I, et al.: p53 selectively regulates developmental apoptosis of rod photoreceptors. PLoS One. 2013; 8(6): e67381. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSahaboglu A, Paquet-Durand O, Dietter J, et al.: Retinitis pigmentosa: rapid neurodegeneration is governed by slow cell death mechanisms. Cell Death Dis. 2013; 4(2): e488. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVlachantoni D, Bramall AN, Murphy MP, et al.: Evidence of severe mitochondrial oxidative stress and a protective effect of low oxygen in mouse models of inherited photoreceptor degeneration. Hum Mol Genet. 2011; 20(2): 322–35. PubMed Abstract | Publisher Full Text\n\nBhatia V: Manuscript 15579 F1000 Research. Open Science Framework. 2018. http://www.doi.org/10.17605/OSF.IO/X3CM7\n\nHirozane T, Tohmonda T, Yoda M, et al.: Conditional abrogation of Atm in osteoclasts extends osteoclast lifespan and results in reduced bone mass. Sci Rep. 2016; 6: 34426. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDonovan SL, Dyer MA: Preparation and square wave electroporation of retinal explant cultures. Nat Protoc. 2006; 1(6): 2710–8. PubMed Abstract | Publisher Full Text\n\nBarlow C, Hirotsune S, Paylor R, et al.: Atm-deficient mice: a paradigm of ataxia telangiectasia. Cell. 1996; 86(1): 159–71. 
PubMed Abstract | Publisher Full Text\n\nValdés-Sánchez L, De la Cerda B, Diaz-Corrales FJ, et al.: ATR localizes to the photoreceptor connecting cilium and deficiency leads to severe photoreceptor degeneration in mice. Hum Mol Genet. 2013; 22(8): 1507–15. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "37370",
"date": "30 Aug 2018",
"name": "Travis H. Stracker",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript seeks to address the underlying mechanisms of photoreceptor degeneration, particularly that due to ubiquitously expressed proteins. They hypothesize that R-loops may play a role, as several splicing and RNA helicase proteins have been implicated in degeneration and R-loop formation. Using the S9.6 antibody, they find that R-loops are formed in photoreceptor cells of the ONL at higher levels than other cell types. They reason that splicing defects could further increase DNA damage in this high R-loop environment and deplete PRPF31, mutations in which are associated with photoreceptor degeneration. In both RPE1 cells depleted for PRPF31 and primary SVF cells from mice with a PRPF31 knockin mutation, increased levels of gH2AX and 53BP1, markers of DNA breaks, are observed. In the Ki cells, this is suppressed by expression of RNAseH1 that will degrade R-loops, supporting the proposition that the damage is R-loop dependent. However, this signaling was not observed in the ONL layer in vivo but it appears that 53BP1 expression is very low compared to other tissues. They next demonstrate that in response to IR, DNA repair assessed by the appearance and loss of gH2AX is slower and independent of ATM and 53BP1, both of which appear to be lowly expressed in the mature retina. Interestingly, they identify a truncated form of ATM mRNA expressed in the retinal cells that may lead to a null or kinase dead ATM protein. 
Next they show that peroxide activates ATM in a manner dependent on RNAseH1 (thus, presumably R-loops) but in contrast, 53BP1 foci that co-localize with R-loops are suppressed by peroxide. Analysis of ATM deficient cells using the HB domain to stabilize R-loops revealed fewer R-loops and 53BP1 foci, suggesting that ATM promotes R-loop formation to recruit 53BP1. Unexpectedly, gH2AX signal is strongly increased in the presence of peroxide and RNAseH1 expression, potentially reflecting deficient repair. Finally, the authors show that ATM loss promotes photoreceptor survival and ONL thickness in a mouse model.\nThe manuscript is overall well written and the data clearly presented. There are a number of new and interesting findings regarding the expression of damage factors and their influence on the retina. However, the logic and flow of the manuscript are somewhat confusing in places. The authors switch between post-mitotic tissues and proliferating cells and there is some apparently conflicting data and overstatements that could use clarification. Here are some specific comments and suggestions to address these points.\nUnder the header “RNA:DNA hybrids specifically accumulate in photoreceptor cells of retina” the authors state: “Higher levels of RNA:DNA hybrids are observed in adult photoreceptor nuclei than in the other retinal neurons”. Based on the data provided, this claim, which is central to the manuscript, is not well substantiated. Essentially a single cell representing multiple cell types is shown in Figure 1b in comparison to the ONL cells. To support this definitive statement that R-loops are specifically higher in the ONL, a quantitative or even semi-quantitative analysis (for example, nuclear intensity in each population) is needed. At a minimum, additional examples of other cell types should be provided to demonstrate some level of specificity. 
Could the discrepancy in gH2AX and 53BP1 signaling between Ki cells in culture and in vivo be due to the proliferative status of the cells? Are the ONL cells analyzed all post-mitotic? Additions to the text or a Ki67 stain of the retinal layers could address this issue, as DNA replication and mitosis could exacerbate the levels of damage. In Figure 5, the authors show that H2O2 activates ATM and that this is suppressed by RNAseH1 expression, thus implicating R-loops (very interesting). However, it is not directly shown that H2O2 actually increases R-loops. This should be stated and referenced if already known or shown directly. The authors state: “Clearly, DNA repair activity of ATM and 53BP1 depend on RNA:DNA hybrids.” I do not agree with this unqualified statement, particularly as their repair activity in relation to these hybrids is not measured in this work. Previous work has implicated R-loops in ATM activation (Tresini et al, Nature 2015) or proposed that R-loops impair ATM-mediated repair (Walker et al, Nature Neuroscience 2017) while the authors here implicate ATM activity in R-loop formation. The cause-effect relationship of ATM, R-loops and repair thus remains somewhat unclear to me. This could be addressed by a more substantial discussion or a model clarifying the position of the authors. The observation in Figure 7d that RNAseH1 increases gH2AX dramatically is very interesting but could use clarification, in part because this result is not apparent in Figure 5a, where essentially the same experiment is done if I understand correctly. In 7d, gH2AX levels are reduced by H2O2+RNAseH1 expression in contrast to 5a where they are close to doubled. Considering the results of 5a show reduced ATM activation in the peroxide+RNAseH1 setting, the increase in gH2AX observed in 7d must therefore be almost completely ATM-independent. The results of Figure 4b also show ATM-independent activation of gH2AX following IR. 
The model that ATM induces R-loops to recruit 53BP1 for repair is therefore not fully supported. Again, this could be clarified via the text or a model. The manuscript starts by focusing on the ONL in Figure 1 and pointing out that it has high R-loops, and in Figure 4 by showing it has low ATM/53BP1. Therefore, the fact that ATM loss protects the tissue from R-loops (Figure 8) is somewhat counterintuitive if ATM is not there to respond to R-loops or to promote their formation. This would suggest that ATM-dependent tissue loss would occur prior to postnatal day 10 when ATM is expressed. If R-loops are indeed ATM dependent, as shown in Figure 7, it would predict that the high R-loops in the ONL must occur before postnatal day 10. Have R-loop levels been assessed at different time points? The title implicates 53BP1 in photoreceptor survival while this is not actually demonstrated. ATM dependent activation of p53 is independent of 53BP1 and a pro-apoptotic role of ATM in this setting could therefore also be independent of 53BP1.\nMinor issues\nIn the Figure 3 legend, the caption of (a) states …..after exposure to 5 Gyrase.. rather than 5 Gray. No size markers are shown on western blots; these should be added for reference and reproducibility. The legends of Figures 5 and 6 make no reference to the cell or tissue type being used for the experiments. Given that ATM and 53BP1 expression are lost in the retina following postnatal day 10, the age of the tissues assayed in Figures 2, 3 and 8 becomes very relevant. This should be stated in the legends.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "37369",
"date": "04 Sep 2018",
"name": "Hemant Khanna",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPhotoreceptors undergo immense oxidative stress throughout the life of an organism. However, it is not clear how these neurons cope with such stress. This manuscript attempts to tackle this question by assessing the role of RNA-DNA hybrid resolution and DNA repair mechanisms during stress. They identify some novel findings, including inefficient DNA repair in photoreceptors and depletion of ATM (a sensor of oxidative stress) levels by P20 in mouse retina. It is also interesting to note that depletion of ATM can have a protective effect in the rd1 mice. These studies also indicate a possible difference between murine and human retinal response to oxidative stress, hence, a different phenotype in mouse models of human RP. The manuscript is well written and sufficiently explained.\nHowever, there are few minor concerns that should be addressed:\n1. The ONL staining in Fig 2E may represent cone nuclei. It would be interesting to comment on the possible involvement of cones in this response. 2. Are the cells used in some experiments primary vascular fraction? If so, were they synchronized as there is a comparison between retinas (post-mitotic neurons) and proliferating cells for oxidative stress responses. It would help to clarify or acknowledge such differences.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "38310",
"date": "27 Sep 2018",
"name": "Florian Frohns",
"expertise": [
"Reviewer Expertise Radiation Biology and DNA damage repair"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript the authors analyze mechanisms of photoreceptor degeneration. As a major finding they show that photoreceptors show a higher amount of DNA:RNA hybrids when compared to other cells. In several experimental steps the authors then provide evidence that DNA:RNA hybrids can activate ATM. The activation of ATM, in turn, is able to induce degeneration of photoreceptors (PRs) in a mouse model for photoreceptor degeneration. On the other hand the authors provide evidence that ATM is not present in PR cells. Thus, they claim that the accumulation of DNA:RNA hybrids in PRs might be the reason for the compromised DNA damage response, mainly the downregulation of ATM and 53BP1 in this cell type.\nThe manuscript is well written and the presentation of the experiments and results is fine. Although the finding on the compromised DNA damage response of photoreceptors has been described earlier, the authors present several new findings that might help to explain these findings in the near future and will significantly contribute to the understanding of DNA:RNA hybrids in the DNA damage response. However, in many cases the authors try to transfer the data from cell lines directly into the retinal tissue which is questionable in several cases. Furthermore, their claim that ATM is not present in PR ignores already published data showing the activity of ATM in this cell type. Thus, in order to improve the work several points have to be addressed:\n1. 
As presented in Fig. 1, the authors claim that DNA:RNA hybrids are observed at higher levels in photoreceptors. In order to prove this the authors should not only show a representative cross section of a whole retina but also perform quantitative analysis including cells from the INL as well as rod and cone PRs. Furthermore, the authors claim that S9.6 signals in rods are located in the peripheral euchromatin. In contrast, in the INL/GCL cell that is also presented in Fig. 1b, S9.6 signals are located in the peripheral heterochromatin. Thus, the authors should comment on why in these cells DNA:RNA hybrids would be present in chromatin regions without transcription.\n2. In Fig. 2 the authors present an increased number of gH2AX and 53BP1 foci in PRPF31 siRNA-transfected RPE-1 cells and conclude that this increase in DNA damage is due to a higher genomic instability. In contrast, no increased DSBs were found in retinal neurons of PRPF31 deficient mice. The major problem with this finding is that quantification of DNA double strand breaks (DSBs) in the cell lines is not carried out in a cell cycle specific manner. Since the authors did not check the influence of the siRNA treatment on cell cycle distribution and/or progression, these effects might be due to higher amounts of S- or G2-phase cells within one of the samples (since numbers of spontaneous DSBs increase in S-phase due to replication stress and are higher in G2-cells due to the doubled amount of DNA). Thus, the authors should - at least - quantify DSBs specifically for G1-phase cells and exclude S-phase and G2-phase cells from the analysis. The same is true for the primary cells from Prpf31+/A216P mice. Without such a G1-specific analysis it is not useful to compare the data from a proliferating cell line with strictly postmitotic retinal neurons of adult Prpf31+/A216P mice.\nAnother striking finding is the discrepancy between the numbers of gH2AX and 53BP1 foci measured in Fig. 
2a and 2b, since many papers have described that the majority of DNA DSBs show both of these markers. The authors should comment on this.\nFinally the authors describe an increased number of gH2AX and 53BP1 foci in PRs of 20-day-old Prpf31+/A216P mice when compared with wild type PRs. This is wrong for several reasons: First, the authors present measurements of signal intensities and not the quantification of foci. This should be changed in the text. Second, there is only a significant increase in 53BP1 signal intensity but not in gH2AX. The authors also state that 53BP1 is only expressed in the apical layer of photoreceptor cells. As shown by Müller et al. 2018 (Detection of DNA Double Strand Breaks by γH2AX Does Not Result in 53bp1 Recruitment in Mouse Retinal Tissues) these 53BP1-positive cells are indeed cone photoreceptors. Thus, in order to get a better idea which PR cell type shows this change in 53BP1 levels, the authors should measure 53BP1 levels in both rod and cone PRs independently. Furthermore, high resolution pictures of both cell types should be presented since the current pictures do not show whether there is indeed focus accumulation or just an increase in pan-nuclear staining.\n3. The authors show that RNAse H1 overexpression has a strong impact on DSB levels. One might wish to see the impact of this overexpression on the level of RNA:DNA hybrids. The authors should present a quantification similar to the one presented in Fig. 7a.\n4. The major message of this manuscript is that the loss of ATM-53BP1 expression promotes PR survival. As evidence for ATM loss the authors present data from western blot analysis of micro-dissected neural retinas in Fig. 4. I have several problems with these findings and conclusions: First, on page 11 the authors refer to Donovan et al1 for the procedure of microdissection. After checking this paper one would conclude that the authors have taken the whole retina for this analysis. 
But in this case these results would not match the finding that 53BP1 is still present in adult retina (as shown in Fig. 2). Thus, I conclude that microdissection means the isolation of the ONL for western blotting. The authors should clarify their methods. Second, the conclusion that ATM activity is lost seems wrong for the following reason: As Frohns et al2 have shown by the irradiation of Scid mice and additional in vitro experiments, cone and rod PRs show a residual ATM activity that is able to phosphorylate gH2AX after DSB induction. Thus, ATM activity should be present in PRs rendering the major message of this manuscript questionable.\n5. The sequence of data presentation in Fig. 5 and 7 is confusing, making it hard to get a clear model of the role of hybrids, ATM expression and its activation in the presence and absence of DNA damage. The authors should try to create a scheme that helps the reader to understand their data. Furthermore, Fig. 5a indicates decreasing gH2AX levels in H2O2 treated and RNAse H1 overexpressing cells when compared to RNAse H1 overexpression alone. In Fig. 7d the opposite can be observed. Why is that?\n6. In Fig. 6a the authors show that H2O2 treatment alone has no impact on the numbers of 53BP1 foci. This is puzzling since the same treatment increases gH2AX signals as measured in the WB presented in Fig. 5a. Considering that both gH2AX and 53BP1 are DSB markers, the authors should comment on this contradictory finding. In Fig. 6b the authors state that \"53BP1 foci consistently co-localize with RNA:DNA hybrids\". From 10 53BP1 foci shown in one picture I would score only 5 of them as clearly colocalizing with hybrid signals. Thus, the authors should specify that statement or show a quantification.\n7. In Fig. 7d the authors present increasing numbers of gH2AX foci in cells expressing the HB domain. But again the gH2AX analysis is not carried out in a cell cycle specific manner. This is even more of a problem than for Fig. 
2 since RPE-1 cells (which likely were used for this analysis, although it is not clearly stated in the legend) show an S-phase accumulation after HB domain overexpression (as shown in Fig. S5). Thus, the increase in gH2AX foci levels might be due to a higher number of S-phase cells (which usually show higher spontaneous DSB levels) in comparison to WT cells.\n8. On page 6 the authors state \"Expectedly, ATM-depleted cells are also inefficient in forming 53BP1 foci (Figure 7a)\". I agree that this is expected but still the pictures presented are of too low a resolution to support that statement (no foci visible). Thus, high resolution pictures should be presented.\n9. In Fig. S6b it is shown that ATM and 53BP1 expression is detectable in human PRs. Here, the fact that ATM staining shows a punctate or foci-like pattern is puzzling. If this were due to its accumulation at DSBs, why is there no colocalization with the also clearly visible 53BP1 foci (have the authors checked for pATM staining)?\nMinor issues: On page 4 the authors say that the outer half of the INL is composed mainly of horizontal cell nuclei, which is not correct. Horizontal cells are a minority in the outer INL. The most common cell type in this region is the bipolar cell. The authors should correct that.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "38194",
"date": "05 Nov 2018",
"name": "Maria Tresini",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript Bhatia et al. provide evidence that suppression of the ATM signaling pathway in murine photoreceptors (PR) is a protective mechanism from oxidative stress/RNA:DNA-hybrid induced apoptosis. According to their proposed model, the high levels of oxidative stress PR are naturally subjected to, promote formation of RNA:DNA hybrids which activate ATM and result in p53-mediated apoptosis. Suppression of ATM could therefore have protective effects against retinal degeneration. They support this model by presenting evidence showing that: 1. PRs have higher levels of RNA:DNA hybrids relative to other cells in the retina. 2. Impaired splicing, which has been linked to R-loop formation in other models, can cause RNA:DNA dependent double-strand breaks (DSBs) in retina-derived cell lines, but not in neurons, which they attribute to lack of ATM and 53BP1 expression by the latter. 3. In response to ionizing radiation, despite their lower DSB repair capacity, which is independent of ATM, PRs are resistant to apoptosis. 4. Oxidative stress in cells activates ATM through an RNA:DNA hybrid dependent mechanism. 5. Absence of ATM delays cell death in a mouse model of retinal degeneration.\nOverall the authors present a large number of interesting data from experiments with mouse retinas and established cell lines (RPE and SVF cells). Their conclusions are consistent with the literature but they also report novel and even controversial findings. 
The data are well presented and discussed although in some instances a better representation and/or additional controls would strengthen the authors’ arguments. On the somewhat negative side, interesting as the data may be, the large number of issues addressed and experimental models used, result in confusion, loss of coherence and some issues are not addressed or discussed sufficiently. Finally, there are a few statements that should be phrased more carefully as they can be unintentionally misleading.\nSome issues that could be addressed include:\nThe figure supporting the important conclusion that “RNA:DNA hybrids specifically accumulate in photoreceptor cells of retina” should be presented in a manner similar to that in Supplemental figure 1. Can the authors suggest an explanation for the increased RNA:DNA hybrid levels in PRs? Clearly they cannot be formed as a consequence of oxidative stress-activated ATM, as they see a depletion of ATM from ONL. The authors propose that splicing defects (such as silencing or inactivating mutations of PRPF31) induce RNA:DNA-dependent DSBs in cells lines (RPE-1/SVF) but not in retinal neurons. In cultured cells, increased levels of DSBs (assayed here as gH2AX and 53BP1 foci) could be the result of R-loop dependent replication stress and fork collapse. The difference in DSB levels between replicating cells in culture and postmitotic neurons would be more relevant if non-replicating cells (i.e. serum deprived) were used in these experiments or, alternatively, analysis was performed only in G1 cells. Images showing foci formation (e.g. 2a, b, d, e) should be provided at higher resolution as it is impossible to see focal accumulation of gH2AX and 53BP1 in these figures. They should be similar to Fig 3a. The authors report an increase in γH2AX and 53BP1 foci in the retina of postnatal day 20 Prpf31+/A216P mice (text and legend) but the graph actually shows quantification of fluorescence intensity and the γH2AX signal is unchanged. 
The graph/text should be corrected. The authors use the inactive hybrid-binding (HB) domain of RNaseH1 to stabilize RNA:DNA hybrids. These experiments are central to the paper and RNA:DNA hybrid stabilization should be confirmed by S9.6 immunofluorescence.\n\nThe statement that “Noticeably, ATM is shown to promote RNA:DNA hybrid formation on transcribed sites by removal of spliceosomal complex32” is incorrect. While ATM activated by an R-loop mediated pathway can influence spliceosome dynamics and modulate DNA damage-induced alternative splicing, the influence of ATM activity on R-loops was not addressed in this paper. The authors report that oxidative stress activates ATM by a mechanism that depends on RNA:DNA hybrid formation. This is contradictory to the extensive studies of Tanya Paull (e.g. Guo et al, 20101) showing direct ATM activation by oxidation. The authors should discuss this. Also, a control showing that H2O2 treatment results in increased RNA:DNA hybrid formation would be useful to strengthen their conclusion. The statement “Clearly, DNA repair activity of ATM and 53BP1 depend on RNA:DNA hybrids” should be rephrased. While this may be true under certain conditions it is too general and can be misleading.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/7-1233
|
https://f1000research.com/articles/7-823/v1
|
21 Jun 18
|
{
"type": "Software Tool Article",
"title": "aMatReader: Importing adjacency matrices via Cytoscape Automation",
"authors": [
"Brett Settle",
"David Otasek",
"John H Morris",
"Barry Demchak",
"Brett Settle",
"David Otasek",
"John H Morris"
],
"abstract": "Adjacency matrices are useful for storing pairwise interaction data, such as correlations between gene pairs in a pathway or similarities between genes and conditions. The aMatReader app enables users to import one or multiple adjacency matrix files into Cytoscape, where each file represents an edge attribute in a network. Our goal was to import the diverse adjacency matrix formats produced by existing scripts and libraries written in R, MATLAB, and Python, and facilitate importing that data into Cytoscape. To accelerate the import process, aMatReader attempts to predict matrix import parameters by analyzing the first two lines of the file. We also exposed CyREST endpoints to allow researchers to import network matrix data directly into Cytoscape from their language of choice. Many analysis tools deal with networks in the form of an adjacency matrix, and exposing the aMatReader API to automation users enables scripts to transfer those networks directly into Cytoscape with little effort.",
"keywords": [
"Workflow",
"Reproducibility",
"Cytoscape",
"Interoperability",
"REST",
"Microservice",
"Adjacency"
],
"content": "Introduction\n\nAdjacency matrices are a strong choice for storing pairwise element interaction data, such as those commonly produced by biological analysis tools to represent a weighted network of relationships between biological components (such as genes, conditions, pathways, times, etc.). aMatReader facilitates importing general adjacency matrices (such as correlation, similarity, and difference data) into edge attributes of Cytoscape networks. aMatReader aims to enable users to compile Cytoscape networks from one or multiple matrix files by creating edges or edge attributes for nonzero values in the matrix.\n\nWe upgraded the original aMatReader1 to enable Cytoscape Automation2 by exposing two new REST endpoints3,4, bridging the gap between network matrix data in automation scripts and Cytoscape. With Cytoscape Automation, biologists can manipulate Cytoscape networks via REST calls and create complex workflows in their language of choice (e.g. Python and R). Researchers can then utilize Cytoscape’s filtering tools to remove redundant or unremarkable edges between components, slimming the network and emphasizing stronger relationships to further their analysis.\n\nIn this paper, the Implementation section describes the general approach of aMatReader and its REST endpoints. The Operation section describes how to call the endpoint as a Cytoscape Automation Function. The Use Case section demonstrates how to import adjacency matrices into Cytoscape via the aMatReader endpoint, and the Discussion section describes the import performance.\n\naMatReader translates adjacency matrices into Cytoscape networks by adding edges or edge attributes represented by non-null values in the matrix. The square adjacency matrix is the standard matrix representation of a network. In a square matrix, node labels are stored in the first row and column of a table of size (N+1, N+1). The N×N grid of values within the table contains edge weights. 
A non-null value at cell (i, j) represents the weight of an edge between node i and node j. An example matrix text file and graph representation can be seen in Figure 1.\n\n(a) In Excel, Python, etc., the matrix is stored as a 2-dimensional array with optional labels. (b) The matrix is exported to a comma delimited file. (c) Importing the undirected matrix into Cytoscape with aMatReader, edges are defined by nonzero values within the upper triangle.\n\nIf an analysis tool is calculating distance or similarity between all pairs of genes within pathways, it will likely produce a square adjacency matrix. Some other examples of square matrices are covariance and correlation matrices. Square matrices are often symmetric, meaning row and column names are similarly ordered, the value at cell (i, j) is the same as that at (j, i), and the values along the diagonal are calculated by comparing an element to itself. A symmetric matrix represents an undirected network, and only refers to the upper triangle for edge attributes.\n\nIn the case of diagonal square matrices, it can be efficient to omit row names, because the row name at index i is identical to the ith column name. This is especially useful when exporting matrices from Python using numpy by inserting node names into the file as a header row (see Listing 1 for a numpy example).\n\nHowever, there are many cases where an adjacency matrix is not square. For example, a correlation matrix between genes and conditions will have different row elements than column elements. The network generated by such a correlation matrix will be directed and bipartite. As a directed network, the entire matrix is used to generate edge attributes, unlike undirected networks that only use the upper triangle.\n\n\nMethods\n\naMatReader was developed to handle a wide spectrum of adjacency matrix formats. To accommodate possible missing or reordered row and column names, we use an adjacency list of node indices as an intermediary data structure. 
Two separate arrays are used to store source and target node names, where the ith name in the array refers to the node at index i in the adjacency list. Once the matrix has been translated, aMatReader makes a pass through the adjacency list and sets edge attributes in the network.\n\nOne constraint of aMatReader is that the parser expects integer or floating point values. Any String, Boolean, or unrecognized values will be considered null and no edge will be created (and no warning will be generated).\n\nThere are two possible options for importing an adjacency matrix into a Cytoscape network. To create a new network from an adjacency matrix file, the caller can use the import Function. If the adjacency matrix defines edge attributes that should be added to an existing network, the extend Function should be used. This is especially useful because an adjacency matrix can only represent one type of edge attribute, and complex networks are often represented by multiple adjacency matrix files.\n\nIf a new network is being created (via the import Function, described in the Operation section below), all of the network nodes are created and named first. Then each non-null value in the matrix is used to create an edge. The edge attribute takes its name from the name of the matrix file that is being imported.\n\nIf a network is being extended (via the extend Function, described in the Operation section below), aMatReader attempts to match row and column names to existing nodes in the network. If no node exists with the given name, a new one is created. Creating edge attributes is handled similarly; an edge between the source and target node is added if it does not already exist, and then the attribute is set.\n\nSome matrix formats add extra information to provide insight to the parser. Matrices produced by cCrepe and MATLAB optionally prefix column names with a period-delimited description of the weights specified by the matrix (e.g. “sim.score” or “q.value” in cCrepe). 
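Stepping back to the translation described at the start of this section, the intermediary step (name arrays plus an adjacency list of index triples, with non-numeric cells treated as null) can be sketched in Python. This is a hypothetical illustration, not the app's actual Java implementation:

```python
def to_adjacency_list(lines, delimiter=","):
    """Translate a column-labelled matrix into (names, adjacency list).

    Cells that do not parse as numbers are treated as null: no edge is
    recorded and no warning is raised, mirroring the parser constraint
    described above.
    """
    rows = [line.rstrip("\n").split(delimiter) for line in lines]
    names = rows[0]                      # header row of node labels
    adjacency = []                       # (row index, col index, weight)
    for i, row in enumerate(rows[1:]):
        for j, cell in enumerate(row):
            try:
                weight = float(cell)
            except ValueError:
                continue                 # String/Boolean value -> null
            adjacency.append((i, j, weight))
    return names, adjacency

names, adj = to_adjacency_list(["A,B", "0.0,0.5", "0.5,x"])
```

Here the non-numeric cell "x" simply produces no entry, while zero values are kept in the list (whether they become edges is governed by the ignoreZeros parameter described below).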
Additionally, comments can be included in files by inserting a hash symbol at the start of a line. Below is an example gene similarity table produced by MATLAB. More examples of supported format idiosyncrasies can be seen in the documentation provided on the Cytoscape App Store.\n\n\n\nListing 1. Sample adjacency matrix with confusing format. Pipe “|” delimited text file with comments and column name prefix.\n\naMatReader exposes two Functions5 via the Cytoscape CyREST API, import to create a new network and extend to add edge attributes to an existing network. If necessary, the caller specifies the network to be extended as part of the REST URL. The Function endpoints (Figure 2) enable users to manipulate network data as internal Cytoscape data, and are documented in the Apps: aMatReader section of the Swagger document available via Help → Automation → CyREST API.\n\nBoth endpoints expect the same parameters within the JSON body of the request:\n\n\n\nThe files parameter specifies a list of local file paths for matrix files, and is the only required parameter; all other parameters default to the values shown above. Files imported in the same REST call must have the same format and thus be importable with identical parameters. The caller can specify the matrix delimiter (as one of “PIPE”, “SPACE”, “TAB”, or “COMMA”), whether the matrix is symmetric and diagonal and should only import the upper triangle as undirected, whether or not to create edges for zero values (called ignoreZeros), and the edge interaction type (called interactionName). The payload can also define whether row and column names are present in the file (called rowNames and columnNames). 
The removeColumnPrefix parameter informs the parser to ignore a common prefix in column names, if it exists.\n\nNote that the interaction type is only set for edges created by the import, and is not set for pre-existing edges in an extend call.\n\nThe aMatReader endpoints return a CIResponse6 according to Cytoscape Automation best practices. If the call succeeds, the CIResponse contains an import result object (as the data element); otherwise, it contains an explanation of the error (as the errors element):\n\n\n\nThe newEdges value contains the number of edges created in the network, and the updatedEdges contains the number of edges that already existed and received new edge attribute(s).\n\nIf the delimiter is unrecognized or any of the matrix files cannot be found or fails to parse correctly, the errors[0].status element returns 404, and the remainder of the errors[0] element contains additional information.\n\nIn order to download and use aMatReader, ensure that you are running Cytoscape version 3.6.0 or later with at least 512MB of free memory to store the matrix before creating the edges.\n\nCalling aMatReader Functions. To import files to a new network, the caller must send an HTTP POST request to /v1/aMatReader/import with a JSON payload object specifying the list of matrix files and any optional parameters listed above. To extend an existing network, the caller must also pass the networkSUID parameter as part of the URL (/v1/aMatReader/extend/{networkSUID}).\n\nNote that the networkSUID must be an integer. The caller can determine a network’s SUID via the /v1/networks endpoint.\n\nExample code is provided in R, Python and as a Bash curl, but can easily be adapted into any language that supports REST calls.\n\nR\n\n\n\nPython\n\n\n\nBash\n\n\n\n\nUse cases\n\nThe simple use case that inspired an upgrade to the original aMatReader app was filtering correlation data in search of similarities among different stages of disease severity. 
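A minimal Python sketch of assembling such an import call may make the request shape concrete. Hedged assumptions: port 1234 is Cytoscape's usual CyREST default, the file path is hypothetical, and the parameter defaults shown here are illustrative rather than the app's documented defaults:

```python
import json

# Assumed local CyREST base URL; port 1234 is Cytoscape's usual default.
BASE = "http://localhost:1234/v1/aMatReader"

def build_import_request(files, delimiter="TAB", undirected=False,
                         ignoreZeros=True):
    """Assemble the URL and JSON body for an aMatReader import call.

    'files' is the only required parameter; the other defaults used
    here are illustrative, not the app's documented defaults.
    """
    body = {"files": files, "delimiter": delimiter,
            "undirected": undirected, "ignoreZeros": ignoreZeros}
    return BASE + "/import", json.dumps(body)

url, body = build_import_request(["/path/to/sim_scores.csv"],
                                 delimiter="COMMA")

# Sending it requires a running Cytoscape with aMatReader installed:
# req = urllib.request.Request(url, data=body.encode(), method="POST",
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)           # CIResponse
#     edges = result["data"]["newEdges"]  # edges created
```

The same body, with the networkSUID appended to the URL as described above, would drive an extend call instead.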
The R package cCrepe gives compositionally corrected scores for all pairwise connections in a dataset, producing adjacency matrices for similarity scores and q-values. Both files contained row names, as well as column names prefixed by “sim.score” or “q.value”, respectively. aMatReader allows the user to import both files into one Cytoscape network, which can easily be filtered with a few extra calls to the core Cytoscape CyREST API (as shown below):\n\nPython\n\n\n\nThe response to the aMatReader function call will create a Cytoscape network with edges that each have sim_score and q_val columns. With a few extra lines, the script can perform filtering and analysis without any interaction from the user. Exposing the aMatReader API enables the researcher to completely automate their analysis process without leaving their script.\n\n\nDiscussion\n\nTranslating from matrix to adjacency list representation decreases the space from O(N^2) to O(N+E) for a network with N nodes and E edges, and the time complexity is similarly decreased for adding the edge attributes to the network.\n\naMatReader was designed with existing adjacency matrix exporters in mind, such as the igraph.Graph.Adjacency7 and pandas DataFrame8 objects. We will continue to improve aMatReader to handle more diverse matrix formats, including string matrices, and to provide a CyREST Function that returns the predicted parameters for importing a matrix file.\n\n\nSummary\n\nIn this paper, we present aMatReader, a general adjacency matrix importer app for Cytoscape 3. aMatReader creates another data entry method that allows researchers to join multiple adjacency matrices, each representing an edge attribute, into one network. 
aMatReader was developed to handle importing diverse matrix file formats into Cytoscape via automation scripts and libraries.\n\nThe aMatReader CyREST API is composed of two import Functions that enable users to create a new network or extend an existing network with an adjacency matrix defining edge attributes. Users can also import multiple adjacency matrices that represent different edge attributes in the same network simultaneously before calling Cytoscape’s advanced filtering functions.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.\n\n\nSoftware availability\n\nThe aMatReader app is available on the Cytoscape App Store: http://apps.cytoscape.org/apps/aMatReader.\n\nSource code available from: https://github.com/idekerlab/aMatReader.\n\nArchived source code available from: https://doi.org/10.5281/zenodo.12873039.\n\nLicense: GNU Lesser General Public License v2.1.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported with funding from the National Resource for Network Biology (NRNB) under award number P41 GM103504 and the National Institute of General Medical Sciences (NIGMS) under award number R01 GM070743, both assigned to Trey Ideker.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe would like to thank Annika Kroeger for bringing more adjacency matrix use cases to our attention and inspiring the updates to aMatReader.\n\n\nReferences\n\nCytoscape App Store - aMatReader. [cited 22 May 2018]. Reference Source\n\ncytoscape-automation. Github. Reference Source\n\nFielding RT, Taylor RN: Principled Design of the Modern Web Architecture. ACM Trans Internet Technol. New York, NY, USA: ACM; 2002; 2: 115–150. Publisher Full Text\n\nOno K, Muetze T, Kolishovski G, et al.: CyREST: Turbocharging Cytoscape Access for External Tools via a RESTful API [version 1; referees: 2 approved]. F1000Res. 2015; 4: 478. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCytoscape Automation FAQ - What is the difference between Commands and Functions? In: Google Docs. [cited 18 Apr 2018]. Reference Source\n\nApp Developers: Cytoscape Function Best Practices - CI Response. [cited 18 Apr 2018]. Reference Source\n\nCsardi G, Nepusz T: The igraph software package for complex network research. Inter Journal, Complex Systems. 2006; 1695: 1–9. Reference Source\n\nMcKinney W: Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. “O’Reilly Media, Inc.”; 2012. Reference Source\n\nSettle B, Morris S: idekerlab/aMatReader: aMatReader v1.1.3 (Version v1.1.3). Zenodo. 2018. Data Source"
}
|
[
{
"id": "35308",
"date": "02 Jul 2018",
"name": "Hideo Matsuda",
"expertise": [
"gene expression analysis",
"gene regulatory network inference"
],
"suggestion": "Approved",
"report": "Approved\n\nThis paper presents a tool for importing adjacency matrices into Cytoscape. The tool is particularly useful to represent weighted biological networks, such as co-expression relationships of genes, confidence scores of predicted relationships, etc.\nMinor comment: In \"files\" parameter in the JSON code, the character codes of some double quotes (\") are different from others.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3873",
"date": "03 Aug 2018",
"name": "Brett Settle",
"role": "Author Response",
"response": "We would like to thank the referee for positive feedback and helpful comments. The inconsistent character codes will be fixed."
}
]
},
{
"id": "35307",
"date": "03 Jul 2018",
"name": "Kimberly Glass",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is a clearly-written summary of the updated aMatReader API. This tool will be useful for reading in certain types of networks (e.g. correlation networks) into Cytoscape.\nA few additional thoughts:\nA description of how the tool works from within Cytoscape would be a nice addition to the paper for the less-computationally-savvy users of Cytoscape.\n\nIt would be useful for the authors to clarify that for directed networks, rows in the input file represent \"source\" nodes and columns \"target\" nodes.\n\nI found the discussion section too short. Additional discussion on how aMatReader overcomes the limitation described, and/or when there is an advantage to the matrix format over an adjacency list representation would be useful.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3872",
"date": "03 Aug 2018",
"name": "Brett Settle",
"role": "Author Response",
"response": "We would like to thank the referee for the positive feedback and respond to the very helpful comments: We wrote this publication as part of a collection of Cytoscape Automation papers, which focus on enabling researchers to programmatically access CyREST endpoints exposed by apps, over Cytoscape Desktop access. However, I agree that it would be useful to include a section for non-programmers near the beginning. Thank you for noticing this omission. We will add the row/column information to the paper. We will be sure to add more information to the discussion section in the near future. The current implementation of aMatReader translates the matrix to a mapping to better handle sparse matrices. Cytoscape core network importer supports adjacency lists so that format would be preferred, but some workflows output one or many adjacency matrices that need to be imported into Cytoscape all at once."
}
]
}
] | 1
|
https://f1000research.com/articles/7-823
|
https://f1000research.com/articles/7-822/v1
|
21 Jun 18
|
{
"type": "Software Tool Article",
"title": "Copycat Layout: Network layout alignment via Cytoscape Automation",
"authors": [
"Brett Settle",
"David Otasek",
"John H Morris",
"Barry Demchak",
"Brett Settle",
"David Otasek",
"John H Morris"
],
"abstract": "The copycatLayout app is a network-based visual differential analysis tool that improves upon the existing layoutSaver app and is delivered pre-installed with Cytoscape, beginning with v3.6.0. LayoutSaver cloned a network layout by mapping node locations from one network to another based on node attribute values, but failed to clone view scale and location, and provided no means of identifying which nodes were successfully mapped between networks. Copycat addresses these issues and provides additional layout options. With the advent of Cytoscape Automation (packaged in Cytoscape v3.6.0), researchers can utilize the Copycat layout and its output in workflows written in their language of choice by using only a few simple REST calls. Copycat enables researchers to visually compare groups of homologous genes, generate network comparison images for publications, and quickly identify differences between similar networks at a glance without leaving their script. With a few extra REST calls, scripts can discover nodes present in one network but not in the other, which can feed into more complex analyses (e.g., modifying mismatched nodes based on new data, then re-running the layout to highlight additional network changes).",
"keywords": [
"Workflow",
"Reproducibility",
"Cytoscape",
"Interoperability",
"REST",
"Microservice",
"Layout",
"Alignment"
],
"content": "Introduction\n\nThe copycatLayout app1 (hereafter “Copycat”) is an evolution of the existing layoutSaver app2, a visual network aligner that maps node locations from one network view (called the source) to another (called the target). Copycat aims to enable Cytoscape users to compare and contrast networks by highlighting and arranging nodes not common to both networks, and by showing both networks using the same layout, scale and placement.\n\nWe upgraded Copycat to enable Cytoscape Automation3 by exposing a new REST endpoint4,5, thereby bringing the power of network-based differential analysis to automation scripts. Cytoscape Automation enables biologists to incorporate Cytoscape network visualization and analysis functionality (via REST calls) into workflows written in a language in which they are already productive (e.g., Python and R).\n\nFor example, as Cytoscape is called to transform a network (the target), it’s often important to be able to show a correspondence to an original network (the source). The Copycat endpoint does this by invoking the Copycat layout to make common and disjoint nodes obvious and easily available to a workflow. In a workflow that performs multiple network transformations, Copycat can be used to pinpoint network differences after each transformation, thereby boosting confidence in the transformation and illuminating its real effect. Figure 1 illustrates a workflow that might call Copycat repeatedly to create a frame-by-frame movie of network manipulation.\n\n(a) The starting network is loaded into Cytoscape and a simple layout is applied. (b) Data-derived subnetwork. (c) Copycat layout is applied, mapping the starting network layout onto the subnetwork and selecting the unmapped node that was not found in the source.\n\nAs another example, applying Copycat layout to similar networks (e.g., rat, mouse or human) and mapping nodes by homology assignments can help verify and discover new homologous genes and gene pairs. 
Unmapped nodes might be useful in predicting gene positions based on successfully mapped neighbors.\n\nIn this paper, the Implementation section describes the general approach of the Copycat layout and its REST endpoint. The Operation section describes how to call the endpoint as either a Cytoscape Automation Function or Command. The Use Case section demonstrates the Copycat endpoint use in a real workflow, and the Discussion section describes the layout performance.\n\n\nMethods\n\nThe Copycat endpoint operates on two Cytoscape network views, where a source view contains a source network, and a target view contains a target network. The source view acts as a reference for laying out nodes in the target view.\n\nCopycat matches nodes in a source view to nodes in a target view based on a common node attribute value. For each match, it sets the target node’s X Location, Y Location and Z Location visual attributes to those of the source node. For each source node, the attribute value is found in a column of the source network table, and likewise for each target node. (While the columns themselves may be different, they are assumed to have the same type and meaning.)\n\nCopycat implements a multi-pass algorithm where the first pass creates a map of source node attributes to source view coordinates. The second pass searches for target node attribute values in the source node attribute map and sets the target node’s view coordinates when an attribute match is found.\n\nA node attribute value in the source column is considered to match a value in the target column if they are lexicographically equal. Note that a poor choice of mapping attributes can lead to confusing layouts. For example, when multiple target nodes match a single source node, all of the matching target nodes will be stacked on top of each other at the coordinates of the source node. Also, if a target node matches multiple source nodes, only the last encountered source mapping is used. 
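The two passes and matching semantics just described can be sketched in Python. This is a hypothetical stand-in for the app's Java implementation, with nodes modeled as simple dicts:

```python
def copycat_layout(source_nodes, target_nodes, key):
    """Pass 1: map each source node's mapping-attribute value to its
    coordinates (a duplicate source value overwrites an earlier one,
    i.e. the last encountered mapping wins, as described above).
    Pass 2: copy coordinates onto matching target nodes and collect
    the unmapped targets.
    """
    coords = {}
    for node in source_nodes:                      # pass 1
        coords[node[key]] = (node["x"], node["y"])
    unmapped = []
    for node in target_nodes:                      # pass 2
        if node[key] in coords:
            node["x"], node["y"] = coords[node[key]]
        else:
            unmapped.append(node)
    return unmapped

source = [{"name": "A", "x": 10.0, "y": 20.0},
          {"name": "H", "x": 30.0, "y": 40.0}]
target = [{"name": "A", "x": 0.0, "y": 0.0},
          {"name": "N", "x": 0.0, "y": 0.0}]
leftover = copycat_layout(source, target, key="name")
```

After the call, target node A has adopted A's source coordinates, while N is returned as unmapped (in the app it would be selected and, optionally, gridded).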
To avoid visual confusion, avoid having multiple nodes associated with the same attribute value. (A good choice for a mapping attribute is the node’s name if each node has a different name. A bad choice could be a node’s in-degree if multiple nodes have the same in-degree.)\n\nOptionally, Copycat sets the selected attribute for source nodes that aren’t mapped and for target nodes whose attributes aren’t found in the source node map, as shown in Figure 2. (The caller can subsequently query and act upon selected source and target nodes using other CyREST endpoints). Also optionally, unmapped target nodes are laid out in a grid in the top right portion of the target view.\n\nAfter the Copycat layout, the target network nodes have the same coordinates as the corresponding source network nodes. Because the node H in the source and nodes N and O in the target are unmatched, they are selected.\n\nCopycat exposes a single endpoint with both a Commands and Functions6 variant, each with similar parameters. While both variants perform the same layout, Commands are most conveniently used in conjunction with the Cytoscape scripting facilities, and Functions are most convenient for calls from scripting languages such as Python and R.\n\nGenerally, the caller specifies the Cytoscape views containing the source and target networks (called sourceViewSUID and targetViewSUID) and provides the name of the mapping column in the source network (called sourceColumn) and target network (called targetColumn). It can control whether unmapped nodes are selected (called selectUnmapped) or the unmapped nodes are laid out in a grid (called gridUnmapped).\n\nNote that the caller can determine columns available for a network via a CyREST endpoint, such as /v1/networks/{networkId}/tables/defaultnode/columns.\n\nNote that the source and target columns must both be either string types or integer types.\n\nThe Copycat endpoint returns a CIResponse7 according to Cytoscape Automation best practices. 
If the call succeeds, the CIResponse contains a layout result object (as the data element); otherwise, it contains an explanation of the error (as the errors element):\n\n\n\nThe mappedNodeCount value contains the number of target nodes successfully mapped, and the unmappedNodeCount contains the number of target nodes that did not correspond to a source node. To calculate the number of unmapped source nodes, the user can execute a GET request at /networks/{networkSUID}/nodes/selected and get the length of the list returned.\n\nIf either of the network views cannot be resolved by the given SUIDs or the mapping columns cannot be found in the node table of the source and target network, the errors[0].status element returns 404, and the remainder of the errors[0] element contains additional information.\n\n\nOperation\n\nTo apply Copycat Layout to a network, you must be running Cytoscape version 3.6.0 or later with at least 512MB of free memory.\n\nCopycat as a Function. The Functions endpoint is documented in a Swagger page in the Layouts section available via Help → Automation → CyREST API.\n\nThe caller must pass the sourceViewSUID and targetViewSUID parameters as part of the URL (/v1/apply/layouts/copycat/{sourceViewSUID}/{targetViewSUID}), and the remaining parameters in a JSON payload object:\n\n\n\nNote that unlike other layout REST endpoints, the Copycat Function requires an HTTP PUT request because it consumes a JSON payload with extra parameters; the other CyREST layout endpoints require GET requests with optional parameters directly in the URL.\n\nNote that the sourceViewSUID and targetViewSUID must be either integers or the word “current” (meaning the view currently selected in Cytoscape). The caller can determine a network view’s SUID via a number of CyREST endpoints, including /v1/networks/views/currentNetworkView. 
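As a sketch, a Python caller might assemble the PUT request described above as follows (the base URL assumes CyREST's usual default local port 1234, and the column name "name" is just an illustrative choice; the HTTP call itself can be made with any client library):

```python
import json

CYREST_BASE = "http://localhost:1234/v1"  # assumed default local CyREST address

def build_copycat_put(source_view_suid, target_view_suid,
                      source_column="name", target_column="name",
                      select_unmapped=True, grid_unmapped=False):
    """Build the URL and JSON payload for the Copycat Functions endpoint.

    The two view SUIDs travel in the URL; the remaining parameters go in
    the JSON payload of the HTTP PUT request.
    """
    url = (f"{CYREST_BASE}/apply/layouts/copycat/"
           f"{source_view_suid}/{target_view_suid}")
    payload = json.dumps({
        "sourceColumn": source_column,
        "targetColumn": target_column,
        "selectUnmapped": select_unmapped,
        "gridUnmapped": grid_unmapped,
    })
    return url, payload

# On success, the CIResponse's "data" element carries the mapping counts,
# e.g. (hypothetical values):
sample_response = {"data": {"mappedNodeCount": 120, "unmappedNodeCount": 3},
                   "errors": []}
```

Passing the returned url and payload to, e.g., requests.put(url, data=payload, headers={"Content-Type": "application/json"}) would then perform the actual layout against a running Cytoscape instance.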
The sourceViewSUID and targetViewSUID must reference different views.\n\nExample code is provided in R, Python and as a Bash curl, but can easily be adapted into any language that supports REST calls. (Note that you must determine the values of sourceViewSUID and targetViewSUID independently ahead of these calls.)\n\nR\n\n\n\nPython\n\n\n\nBash\n\n\n\nCopycat as a Command. The Commands version is documented in the Swagger page reachable from Cytoscape’s Help → Automation → CyREST Command API and can be found in the layouts section. Commands are executable via the Cytoscape Command Tool8 as well as the CyREST Commands API.\n\nThe caller must pass the source and target network identifiers as part of the payload object instead of the URL. Each must be the name of a network (e.g., “galFiltered.sif”) instead of an SUID integer; Copycat will operate on the primary view for the network.\n\nNote that like other layout Command REST endpoints, the Copycat Command requires an HTTP POST request.\n\nNote that with the current CyREST Commands, it is difficult to query Cytoscape to find the name of a network. The caller must know the name in advance, likely as a result of creating or naming it by using a different Command prior to the Copycat call (e.g., /v1/commands/network/create empty).\n\nExample code is provided in R, Python and as a Bash curl, but can easily be adapted into any language that supports REST calls. (Note that you must determine the values of sourceNetworkName and targetNetworkName independently ahead of these calls.)\n\nR\n\n\n\nPython\n\n\n\nBash\n\n\n\nAdditionally, the Copycat Command can be executed directly via the Cytoscape Command Tool or as part of a Cytoscape Command script as shown below, where each parameter is specified using a key-value pair. 
Optional parameters can be specified by appending additional key-value pairs.\n\nlayout copycat sourceNetwork={sourceNetworkName} targetNetwork={targetNetworkName}\n\n\nUse case\n\nMany biological analyses rely on performing differential analysis on derived subnetworks. In the case of inferring gene regulatory networks9, it is beneficial to show mutations at each time-step without touching the unchanged parts of the network. Identifying and visualizing the changes over time is much easier with Cytoscape Automation. After loading networks into Cytoscape, the CyREST API can be used to fetch the SUIDs of networks and views, and the names of columns in the node table. What follows are some of the essential steps for this kind of analysis.\n\n\n\nInsert the network view SUIDs into the sample scripts listed above to apply the Copycat layout, and the CyREST API will return the number of mapped and unmapped nodes in the target layout. Once the layout has completed, add a GET request to /networks/{SUID}/views/{viewSUID}.png to generate images of the aligned networks. Adding the selectUnmapped parameter and a few calls to the core Cytoscape network API suite allows callers to take full advantage of the layout results by getting unmapped node information.\n\n\n\nThe complete working example of this workflow is available in a Jupyter notebook found in the copycat-layout GitHub repository at https://github.com/cytoscape/copycat-layout/blob/master/notebooks/Copycat%20Automation%20Example.ipynb.\n\n\nDiscussion\n\nUsing a hashmap to store node attributes and locations for the source network alone allows Copycat layout to run in O(N+M) time while only requiring O(N) memory, where N is the number of nodes in the source network and M is the number of nodes in the target network. 
Note that the layout performance is independent of the number of edges in either network, as edges are not accounted for in this algorithm.\n\nIt is also important to note that the Commands endpoint is automatically generated by Cytoscape Tunables and is meant to be used via Cytoscape Command Tools, whereas the CyREST Functions are crafted by the developer specifically for use in Python- and R-based automation scripts. For this reason, the Functions endpoint will be better documented, provide better errors, and accept more script-friendly parameters, such as SUIDs.\n\n\nFuture plans\n\nThe Copycat API follows the same Semantic Versioning10 best practices established by Cytoscape and CyREST, and future modifications will respect the same parameters, functionality, and expected outputs specified in this paper. CopycatLayout aims to be an integral tool for layout alignment in Cytoscape, and intends to accommodate future expectations of alignment analysis without overcomplicating the interface. By integrating repetitive use cases into the API (e.g., returning the list of unmapped node SUIDs as well as their count), we can provide better tools to researchers and greatly reduce the complexity of automation scripts.\n\n\nSummary\n\nIn this paper, we presented the Copycat layout, a network differential analysis app for Cytoscape 3. Copycat provides basic visual alignment functionality that will open the door to more data-centric alignment algorithms.\n\nCopycat layout is also exposed via the Cytoscape CyREST API for automation users and scripted analyses. 
Copycat can be combined with other layouts and CyREST calls to better understand network transformations and generate clean network difference figures.\n\n\nData availability\n\nAll data underlying the results are available as part of the article and no additional source data are required.\n\n\nSoftware availability\n\nThe copycatLayout app is delivered as a component of Cytoscape starting with v3.6.0, and is also available on the Cytoscape App Store: http://apps.cytoscape.org/apps/copycatLayout.\n\nSource code available from: https://github.com/cytoscape/copycat-layout.\n\nArchived source code at time of publication: https://doi.org/10.5281/zenodo.128730711.\n\nLicense: GNU Lesser General Public License v2.1.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported with funding from the National Resource for Network Biology (NRNB) under award number P41 GM103504 and the National Institute of General Medical Sciences (NIGMS) under award number R01 GM070743, both assigned to Trey Ideker.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe would like to thank Justin Huang and Dan Carlin for being the first users of Copycat and for providing valuable feedback during the development process.\n\n\nReferences\n\nNavigation and Layout — Cytoscape User Manual 3.6.0 documentation. Reference Source\n\nCytoscape App Store - layoutSaver. [cited 17 Apr 2018]. Reference Source\n\ncytoscape-automation. Github. Reference Source\n\nFielding RT, Taylor RN: Principled Design of the Modern Web Architecture. ACM Trans Internet Technol. New York, NY, USA: ACM; 2002; 2(2): 115–150. Publisher Full Text\n\nOno K, Muetze T, Kolishovski G, et al.: CyREST: Turbocharging Cytoscape Access for External Tools via a RESTful API [version 1; referees: 2 approved]. F1000Res. 2015; 4: 478. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCytoscape Automation FAQ: What is the difference between Commands and Functions? In: Google Docs. [cited 18 Apr 2018]. Reference Source\n\nApp Developers: Cytoscape Function Best Practices - CIResponse. [cited 18 Apr 2018]. Reference Source\n\nCommand Tool — Cytoscape User Manual 3.6.0 documentation. Reference Source\n\nCarlin DE, Paull EO, Graim K, et al.: Prophetic Granger Causality to infer gene regulatory networks. PLoS One. 2017; 12(12): e0170340. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPreston-Werner T: Semantic Versioning 2.0.0. In: Semantic Versioning. [cited 18 Apr 2018]. 
Reference Source\n\nSettle B, dotasek, Morris S, et al.: cytoscape/copycat-layout: Copycat Layout (Version v1.2.3). Zenodo. 2018. Data Source"
}
|
[
{
"id": "35312",
"date": "04 Jul 2018",
"name": "Bernhard Mlecnik",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nShort summary\nThe Copycat Cytoscape layout plugin presented here maps the node locations of a source network to the node locations of a target network and handles missing nodes. Important is that the source node ids uniquely match the target node ids to avoid network inconsistencies after applying the layout.\nThe Copycat layout is a straightforward to use plugin. The application is simple but powerful and Cytoscape needs this kind of layout algorithm.\nIt is also important the the developers offer a REST enabled service to this plugin which is thoroughly described for different programming languages in the article.\nThe only point the authors could address is a more sophisticated way of handling node inconsistencies. E.g. give a name/id list of nodes that are not unique in the target network to inform the user about this problem instead of just overlaying these nodes on the target network.\nWill the authors of this plugin consider edge information in the future? E.g. to give an option to map only nodes that are linked in the source and target network.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? 
Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3875",
"date": "03 Aug 2018",
"name": "Brett Settle",
"role": "Author Response",
"response": "We would like to thank the referee for the positive feedback and respond to the referee’s comments: In the current version, a non-unique attribute column used as the mapping variable should be handled better in future versions of the app. Currently we are deciding how best to deal with these mappings. We would be very interested in adding functionality to restrict layout mapping to edges instead of nodes, given use cases and/or expected results. These more difficult alignments will be considered for future releases."
}
]
},
{
"id": "35315",
"date": "09 Jul 2018",
"name": "Katja Luck",
"expertise": [
"Reviewer Expertise I am a computational biologist who has specialized in systems biology",
"network biology",
"and data integration. I have worked many times with Cytoscape using its interface as well as its programmatic access points."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article Copycat Layout: Network layout alignment via Cytoscape Automation describes an evolution of an existing Cytoscape App that allows the mapping of the layout of one network onto another network by matching the nodes in both networks using a common node attribute. As a frequent Cytoscape user, I have very much awaited for such a Cytoscape functionality to be developed as it clearly fills a gap.\nFor programmers and frequent users of the Cytoscape automation functions, the article is very understandable and provides all necessary information for judging whether it suits the user's needs and how to apply it. However, for non-computational users of Cytoscape, some more information could be provided helping them to know whether it solves their needs and how to use it. I might have missed it but I cannot locate information within the article or from a reference or link where to find information how and if the App can be used within Cytoscape without using the REST API or the command line tools of Cytoscape. If it is missing, it might add value to the article to provide such an example. For example, a common user scenario that I have already encountered many times in collaborations with biologists is that we had manually optimized the layout of a network (a very time-consuming procedure) to then later realize that the network data has changed again. This meant to reload the new network and start from scratch the manual layout optimization. 
I understand that with the Copycat App, the layout from the previous network (in this article corresponding to the source network) can be transferred to the new network (target network), solving the aforementioned problem. It might be helpful to many readers to quickly see from within the article or elsewhere in the documentation or tutorial how to solve this problem with this new App without using scripts.\nThe whole article is written in very technical terms. Even though references are provided for most of them, the article and software might be more accessible, and therefore more used, especially by non-programmers, if efforts were taken to reduce some of the technicalities in the main text.\nA suggestion for future improvements of the App is the possibility to highlight differences in links between the source and target network. A nice example for such a use case is shown in Figure 1 of the paper. The networks differ by one node as highlighted by the select tool; however, the networks also differ by some links connecting nodes that are present in both networks. These link differences can be easily missed by visual inspection, especially if the networks contain more nodes than in this example, but might be extremely valuable for the user to spot or identify. I think highlighting the links unique to each network would be a great plus.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3874",
"date": "03 Aug 2018",
"name": "Brett Settle",
"role": "Author Response",
"response": "We would like to thank the referee for positive feedback and helpful comments. Below we acknowledge the notes mentioned in the review: This paper was published as part of a collection focused on Cytoscape Automation via CyREST, and so we did not mention applying the copycat layout via Cytoscape Desktop. A short section will be added at the beginning for non-REST users, and it will present the very use case you suggest. This is a very helpful note for future papers on Automation and we will take into account the technical terms that are used. We very much anticipate an interest in network comparison highlighting of non-matching edges as well as nodes. This will definitely be a feature in a future version of the app. Thank you for the very helpful use case."
}
]
}
] | 1
|
https://f1000research.com/articles/7-822
|
https://f1000research.com/articles/7-165/v1
|
08 Feb 18
|
{
"type": "Research Note",
"title": "DNA sequence features in the establishing of H3K27ac",
"authors": [
"Anatoliy Zubritskiy",
"Yulia A. Medvedeva",
"Anatoliy Zubritskiy"
],
"abstract": "The presence of H3K27me3 has been demonstrated to correlate with the CpG content. In this work, we tested whether H3K27ac has similar sequence preferences. We performed a translocation of DNA sequences with various properties into a beta globin locus to control for the local chromatin environment. We demonstrate that H3K27ac is not linked to CpG content of the sequence, while extremely high GC-content may contribute to the establishment of this mark.",
"keywords": [
"histone modification",
"H3K27ac",
"GC-content",
"CpG dinucleotides"
],
"content": "Introduction\n\nHistone modification is a key mechanism of epigenetic regulation. Histone modification varies between cells in some genomic locations but not others. This observation raises the question to what extent histone modifications depend on the underlying nucleotide sequences. It has been reported that the attraction of PRC2 complex and consequent H3K27me3 is correlated with the local density of CpG dinucleotides1. More complex sequence patterns, such as transcription factor (TF) binding sites (TFBS), also affect the presence of histone modifications; SUZ12, a member of the PRC2 complex, binds DNA in a sequence-specific manner; NRCF and ZBTB33 recruit histone deacetylase. The ENCODE project demonstrated a specific histone modification profile around binding sites of many TFs2.\n\nSince the same lysine residue cannot be both methylated and acetylated, the presence of H3K27ac is negatively correlated with the presence of H3K27me33. Although there are several pieces of evidence showing that H3K27me3 is established at least partially in a sequence-specific manner, it is unclear if H3K27ac — an antagonistic activator to a repressive H3K27me3 — shares any sequence specific patterns. A computational approach4 predicted some TF to be linked to H3K27ac, yet the results were dependent on the a background set. In this work, we perform a direct experiment to test whether a specific genomic sequence is capable of recovering H3K27ac.\n\n\nMethods\n\nWe selected a site for Cas9 in the intergenic region of human beta globin locus using services CCTop5 (hg38) and Off-Spotter6 (hg38). The chosen targeting sequence was CTTGTCCCTGCAGGGTATTA. Then we designed targeting oligonucleotides (listed in Table 2) for pSpCas9(BB)-2A-Puro (pX-459 plasmid, a gift from E.P.). 
Oligonucleotides Bgl1Cas_F and Bgl1Cas_R were diluted to 10µM each with annealing buffer (10 mM Tris, pH 7.5, 100 mM NaCl, 1 mM EDTA), heated for 5 minutes to 95°C and then slowly cooled down to room temperature. 500ng of pX459 was digested with 5U of BpiI (Thermo Scientific) for 1h at 37°C, heat inactivated, run on 1.5% agarose, linearized plasmid was cut out and purified with QIAquick Gel extraction kit (Qiagen, cat. no. 28706) and ligated with Bgl1Cas_F/Bgl1Cas_R duplex. Ligation product was transformed into E. coli Top-10 cells, one of the clones was chosen and the insert was confirmed by Sanger sequencing. This plasmid was called pX459-b1.\n\nSelection cassette design (HSV thymidine kinase, 2A peptide, G418R in one frame) was performed with Ugene7. HSVtk and NeoR sequences were PCR amplified with Phusion polymerase (Thermo Scientific) from pHSVTK-Neo (a gift from E.P.) with primers HSVtk_F, HSVtk_R, and G418_F, G418_R, correspondingly. T2A peptide coding sequence, corresponding to amino acid sequence GSGEGRGSLLTCGDVEENPGP, was synthesized by hybridization of oligonucleotides T2A(+) and T2A(-) in annealing buffer. Hybridized duplex was treated with T4 PNK (Thermo Scientific) and gel purified. NeoR fragment was digested with XbaI (Thermo Scientific), T2A fragment was digested with NheI (Thermo Scientific), then these fragments were ligated and the fragment of expected size was gel purified. This T2A-NeoR fragment was digested with XhoI (Thermo Scientific), HSVtk fragment was digested with SalI (Thermo Scientific), fragments were ligated and the fragment of predicted length was gel purified again. This HSVtk-T2A-NeoR fragment was double digested with EcoRI (Thermo Scientific) and BshTI (Thermo Scientific) and ligated with pX459-b1 double digested with EcoRI and BshTI. This step gave the plasmid with full length selection cassette called pHSVtk-T2A-G418R. 
Then plasmid pHSVtk-T2A-G418R was double digested with XbaI and Acc65I (Thermo Scientific), gel purified and ligated with hybridized AdaptUp(+)-AdaptUp(-) duplex. A successful insert was verified by digestion of newly introduced restriction sites: SalI and BamHI. This plasmid was called pAdaptUp-HSVtk-T2A-G418R. This plasmid was digested with NotI (Thermo Scientific), treated with FastAP (Thermo Scientific), gel purified and ligated with AdaptDown(+)-AdaptDown(-) duplex to introduce a 2xBpiI site that generates half-sites for BclI and XhoI after cleavage. The resulting plasmid was transformed into E. coli Top-10 cells and called pAdaptUp-HSVtk-T2A-G418R-AdaptDown. To obtain LoxP-flanked sequences, we used a pBK-CMV-derived plasmid with a modified multiple cloning site containing ordered sites for BclI, NheI, and XhoI. This plasmid was transformed into E. coli JM110 to eradicate Dam methylation, double digested with BclI and NheI, dephosphorylated, gel purified and ligated with LoxP(+)-LoxP(-) duplex, treated with T4 PNK. This plasmid was transformed into E. coli Top10 cells and called pLoxP. To obtain homology regions that flank the Cas9 cleavage site, we PCR amplified them from Caki1 gDNA, using primer pairs Bgl1Up_F and Bgl1Up_R for an upstream fragment and Bgl1Down_F and Bgl1Down_R for a downstream fragment. These PCR fragments were double digested with NheI and XhoI and ligated with pLoxP, double digested by the same sites and dephosphorylated. Ligation products were transformed into E. coli JM110, and purified plasmids were digested with BclI and XhoI to yield upstream and downstream fragments of DNA, each bearing a LoxP site on its end. The upstream fragment was ligated with pAdaptUp-HSVtk-T2A-G418R-AdaptDown double digested with SalI and BamHI to give pUp1L-HSVTK-2a-G418R-AdaptDown. The downstream fragment was ligated with pUp1L-HSVTK-2a-G418R-AdaptDown treated with BpiI and dephosphorylated. 
This step brought us pUp1L-HSVTK-2a-G418R-L1Down.\n\nWe co-transfected Caki1 cells with pX459-b1 and pUp1L-HSVTK-2a-G418R-L1Down using Lipofectamine 3000 (Thermo Scientific) in 2cm2 wells following the manufacturer's instructions. After one week of Puromycin (3µg/ml) selection, cells were split into 96-well plates and selected with 1mg/ml G418 for two weeks. Cells from successfully growing clones were split into two equal aliquots, one for growth and another for genomic DNA isolation. Clones were checked for the presence of the insert with the primer pair BGL1pcr_F - BGL1pcr_R surrounding the insertion site. One clone (called Caki1-GcvS-G418R) with a homozygous insertion was selected for further work.\n\nPlasmid pBK-CMV was digested with SacI and HpaI, blunted with T4 polymerase, self-ligated, transformed into E. coli Top10 and called pBCK-CMVdHpaI-SacI. The LoxP(+) oligo was PNK treated and annealed with LoxP(-) to form a duplex. Bait(+) and Bait(-) oligos were annealed and ligated with the hemi-phosphorylated LoxP duplex. Ligation products were resolved on a 3% agarose gel, and the longest fragment was excised and purified with the QIAquick gel extraction kit, treated with T4 PNK and ligated to NheI treated and dephosphorylated pBCK-CMVdHpaI-SacI. The ligation product was transformed into E. coli Top10 and called p1L-bait-L1. Ten sequences chosen to be insertion targets were PCR amplified from genomic DNA of Caki1 cells with primer pairs 01_Hi_Ac_FUBP1_F...10_No_Ac_GC53_R. Amplicons were gel purified, diluted to a concentration of 100nM, treated with T4 PNK and ligated with the plasmid p1L-bait-L1 treated with Ecl136II and dephosphorylated. Then the library of ligation products was transformed into E. coli Top10, the plasmid library was purified using a plasmid mini kit (Evrogen, cat. no. BC021) and co-transfected with pBS598 EF1alpha-EGFPcre (Addgene) into Caki1-GcvS-G418R cells for Cre-mediated recombination exchange of the insertion and the HSVTK-2a-G418 cassette. 
After 3 days of growth, Ganciclovir (2µM) was added to eliminate cells that did not undergo recombination. After selection for 10–14 days, surviving cells were grown to a subconfluent monolayer in 10cm dishes and then ChIP on H3K27Ac was performed.\n\nChIP was performed according to the Abcam X-ChIP protocol with the following modification: we increased the number of washes in High Salt buffer from one to three. Sonication was performed in PCR tubes (SSIbio, cat. no. 3245-00) on ice using a Sonics Vibra-Cell VCX 130 with an eight-element probe (cat. no. 630-0602). The sonication setup was: 10s pulse, 20s pause, 75% power, 30min total sonication time. We used 2µg of Anti-H3K27Ac antibody (Abcam, ab4729) per ChIP. Control ChIP was performed in the same conditions with Caki1 cells.\n\nAfter ChIP, 1ng of the obtained DNA fragments was PCR amplified with the primer pair BaitF - BaitR. At this step we amplified all DNA sequences inserted between this primer pair. Then 10ng of PCR amplicons from the first step was used as a template for a PCR with primer pairs 01_Hi_Ac_FUBP1_F...10_No_Ac_GC53_R, each pair in an individual reaction tube.\n\n\nResults and Discussion\n\nNone of the GC- and CpG-rich promoter regions that were acetylated in their original genomic loci (rows 1–3 in Table 1 and lanes 1–3 in Figure 1) recovered H3K27ac after relocation to a foreign genomic context in the beta globin locus, suggesting that H3K27ac may not depend directly on such features. An alternative explanation is that some of the CpG dinucleotides became methylated in the foreign genomic environment. Surprisingly, two extremely GC-rich but CpG-poor (and, therefore, unmethylated) sequences (rows 5, 6 in Table 1 and lanes 5, 7 in Figure 1) gained H3K27ac in the foreign environment, while in their native environment (in their original genomic location) they had no H3K27ac. Sequences 5 and 6 are located far from promoters of known genes. 
Sequence 5 contains a lowly expressed CAGE cluster, representing a weak alternative promoter of the KMT2D gene (ZENBU). Therefore, we conclude that the gain of H3K27ac in these regions is unlikely to be explained by transcriptional activity. The lack of H3K27ac in sequence 6 in the native location may be due to the presence of the antagonistic mark H3K27me3, which is lost after translocation to the foreign environment. Yet, for sequence 5, H3K27me3 is absent in all cell types reported in ENCODE. We conclude that the establishment of H3K27ac is not dependent on the CpG content of the sequence per se, as opposed to its antagonistic mark H3K27me3, while it might depend on an extremely high GC-content.\n\n\nData availability\n\nDataset 1: Gel image of amplification of target sequences from native genomic loci compared with amplification from beta-globin locus with or without ChIP on H3K27ac. DOI: 10.5256/f1000research.13441.d1927748",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed\n\n\nGrant information\n\nThis work was supported by the Russian Science Foundation [15-14-30002].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are thankful to Egor Prokhorchuck for providing access to the laboratory equipment, cell lines, plasmid library and help with the experimental setup.\n\n\nReferences\n\nJermann P, Hoerner L, Burger L, et al.: Short sequences can efficiently recruit histone H3 lysine 27 trimethylation in the absence of enhancer activity and DNA methylation. Proc Natl Acad Sci U S A. 2014; 111(33): E3415–3421. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoffman MM, Ernst J, Wilder SP, et al.: Integrative annotation of chromatin elements from ENCODE data. Nucleic Acids Res. 2013; 41(2): 827–841. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTie F, Banerjee R, Stratton CA, et al.: CBP-mediated acetylation of histone H3 lysine 27 antagonizes Drosophila Polycomb silencing. Development. 2009; 136(18): 3131–3141. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhitaker JW, Chen Z, Wang W: Predicting the human epigenome from DNA motifs. Nat Methods. 2015; 12(3): 265–272, 7 p following 272. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStemmer M, Thumberger T, Del Sol Keyer M, et al.: CCTop: An Intuitive, Flexible and Reliable CRISPR/Cas9 Target Prediction Tool. PLoS One. 2015; 10(4): e0124633. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPliatsika V, Rigoutsos I: \"Off-Spotter\": very fast and exhaustive enumeration of genomic lookalikes for designing CRISPR/Cas guide RNAs. Biol Direct. 2015; 10: 4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOkonechnikov K, Golosova O, Fursov M, et al.: Unipro UGENE: a unified bioinformatics toolkit. Bioinformatics. 2012; 28(8): 1166–1167. 
PubMed Abstract | Publisher Full Text\n\nZubritskiy A, Medvedeva YA: Dataset 1 in: DNA sequence features in the establishing of H3K27ac. F1000Research. 2018. Data Source"
}
|
[
{
"id": "30746",
"date": "20 Feb 2018",
"name": "Vasily N Aushev",
"expertise": [
"Cancer biomarkers",
"oncogenesis",
"signal transduction",
"miRNA regulation"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study aimed to test whether H3K27 acetylation depends on the properties of the target sequence itself, or rather is defined by the surrounding genomic context. To assess this question, the authors translocated sequences with different GC% and CpG content to the same beta-globin locus, and then tested their H3K27ac in the new location.\nAs the three tested high-GC, high-CpG sequences lost their H3K27ac in the new location, the authors conclude that the context is more important than the GC or CpG properties of the sequence itself. At the same time, two other sequences actually acquire H3K27ac in the same new location, and only one of those can possibly be explained by the antagonistic H3K27me3 mark.\nThe authors describe their methods with all the details that should be sufficient for reproducibility of their main results. I had, however, some minor technical questions regarding their plasmid construction procedure. For example: 1) From the sequence of the T2A oligos they provide, the peptide sequence seems to be PSPLPSPCLLTCGDVEENPGP, not GSGEGRGSLLTCGDVEENPGP. The latter would be present in the original pX459 plasmid, but not in the constructed one. 2) To my understanding, annealing of the T2A(+) and T2A(-) oligos will produce a hybrid with only an 18 bp double-stranded overlap; for further digestion with NheI, its single-stranded ends need to be built up with some polymerase. I don't see this step in the Methods. 
3) I don't see an XhoI site in the T2A-NeoR fragment, and don't see any other potential site compatible with the SalI site of the HSVtk fragment. In addition, it is recommended that all the source plasmids used be properly referenced, either by their original publication or by a sequence reference in an available repository (GenBank, Addgene, etc.); referring to \"pHSVTK-Neo (a gift from E.P.)\" is not sufficient if the plasmid's sequence cannot be explicitly found by that name. Finally, the authors should double-check their text for typos in the names of all constructs and genes (for example, GAPGH), and be sure to use all titles exactly as they were originally defined (for example, PX459 instead of pX-459, etc.). All these issues, however, are minor and do not distort the main content of the paper.\nFrom the technical point of view, the only question I had is about the final readout for H3K27ac: the authors perform nested PCR followed by visualization of the PCR products on a gel. I am wondering why this method was chosen and not qPCR (as more quantitative) or ChIP-seq (as more unbiased).\nOne of my main concerns is about the reproducibility of the findings: the authors do not report any biological or technical replicates, which makes the results potentially questionable.\nTalking about the interpretation of the results, I am not sure how strong the conclusion can be based on only 10 tested sequences and, even more so, only one genomic context. Even with this limited number of tests, some results cannot be easily explained (for example, the acquired H3K27ac for sample #5), so in future it would be necessary to expand the panel of both sequences and contexts.\nBesides the above-mentioned questions (hopefully the authors can address at least some of them in the revised version), I think the paper is clearly of scientific interest and adds an important piece of knowledge to its field.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3824",
"date": "09 Aug 2018",
"name": "Anatoliy Zubritskiy",
"role": "Author Response",
"response": "We would like to thank the reviewer for the thorough reading of our paper and very useful comments.\n\nQ: From the sequence of the T2A oligos they provide, the peptide sequence seems to be PSPLPSPCLLTCGDVEENPGP, not GSGEGRGSLLTCGDVEENPGP. The latter would be present in the original pX459 plasmid, but not in the constructed one.\n\nA: We have to politely disagree with the reviewer on this point. According to https://eu.idtdna.com/calc/analyzer in hetero-dimer mode, the annealed T2A(+)/T2A(-) duplex with the minimal dG value looks as follows:\nAAAGCTCGAGGGCAGTGGAGAGGGCAGAGGAAGTCTGCTAACATGCGGTG\n||||||||||||||||||\nCAGACGATTGTACGCCACTGCAGCTCCTCTTAGGACCGGGTCGATCGCTTA\nFor a double-stranded DNA synthesized with T4 polymerase, the (+) strand looks as follows:\nAAAGCTCGAGGGCAGTGGAGAGGGCAGAGGAAGTCTGCTAACATGCGGTGACGTCGAGGAGAATCCTGGCCCAGCTAGCGAAT\nThis sequence contains two restriction sites (XhoI and NheI). The sequence between the restriction sites is translated as GSGEGRGSLLTCGDVEENPGP in the 5'->3' direction, frame 1 (https://web.expasy.org/cgi-bin/translate/dna2aa.cgi).\n\nQ: To my understanding, annealing of the T2A(+) and T2A(-) oligos will produce a hybrid with only an 18 bp double-stranded overlap; for further digestion with NheI, its single-stranded ends need to be built up with some polymerase. I don't see this step in the Methods.\n\nA: We thank the reviewer for pointing out this mistake. The typo in the Methods section is corrected and the enzyme name is changed from T4 PNK to T4 polymerase. 
We modified the Methods section and added the following text: \"T2A peptide coding sequence corresponding to amino acid sequence GSGEGRGSLLTCGDVEENPGP was synthesized by hybridization of oligonucleotides T2A(+) and T2A(-) in annealing buffer and consequent treatment of the hybridized duplex with T4 polymerase (Thermo Scientific) followed by gel purification.\"\n\nQ: I don't see an XhoI site in the T2A-NeoR fragment, and don't see any other potential site compatible with the SalI site of the HSVtk fragment.\n\nA: The T2A oligonucleotide duplex contains an XhoI site (CTCGAG) in its 5' region:\nAAAGCTCGAGGGCAGTGGAGAGGGCAGAGGAAGTCTGCTAACATGCGGTGACGTCGAGGAGAATCCTGGCCCAGCTAGCGAAT\nand the primer HSVtk_R contains a SalI restriction site (GTCGAC), AATTGTCGACGTTAGCCTCCCCCATCTCC, compatible with the XhoI site.\n\nQ: In addition, it is recommended that all the source plasmids used be properly referenced, either by their original publication or by a sequence reference in an available repository (GenBank, Addgene, etc.); referring to \"pHSVTK-Neo (a gift from E.P.)\" is not sufficient if the plasmid's sequence cannot be explicitly found by that name. Finally, the authors should double-check their text for typos in the names of all constructs and genes (for example, GAPGH), and be sure to use all titles exactly as they were originally defined (for example, PX459 instead of pX-459, etc.). All these issues, however, are minor and do not distort the main content of the paper.\n\nA: We carefully checked all the names of the genes and plasmids, as well as the rest of the text, for typos. The plasmid name pHSVTK-Neo is internal; this plasmid is referred to as pBS246-neo/tk in the following paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1317619/. 
The reference describing the plasmid pBS246-neo/tk was added to the main text: \"HSVtk and NeoR sequences were PCR amplified with Phusion polymerase (Thermo Scientific) from plasmid pBS246-neo/Tk (a gift from E.P.); construction of this plasmid is described elsewhere: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1317619/\"\n\nQ: From the technical point of view, the only question I had is about the final readout for H3K27ac: the authors perform nested PCR followed by visualization of the PCR products on a gel. I am wondering why this method was chosen and not qPCR (as more quantitative) or ChIP-seq (as more unbiased).\n\nA: For a pilot study we decided to focus on making sure that the insertion of the selected fragments into a foreign environment (the beta-globin locus) did take place as a proof of principle; therefore we chose to perform nested PCR. In this regard, qPCR would not make the conclusions more reliable. Performing a regular ChIP-seq experiment after the insertion seems like overkill, since the histone mark is expected to change in only one region. On the other hand, a modified version of the ChIP-seq experiment with multiple insertions may indeed be more unbiased. Yet we believe that this much more complicated experimental setup is beyond the scope of the current paper.\n\nQ: One of my main concerns is about the reproducibility of the findings: the authors do not report any biological or technical replicates, which makes the results potentially questionable.\n\nA: The insertion of the 10 target sequences was performed in two independent biological replicates; then every replicate was ChIP'd independently and the resulting DNA was pooled before the nested PCR step. 
To avoid any further confusion, the corresponding text was added to the Methods.\nSection \"Construction of a recombination target\": \"This step was performed in two independent replicates.\"\nSection \"Chromatin Immunoprecipitation (ChIP)\": \"Biological replicates of targeted insertion of DNA sequences were processed independently.\"\nSection \"Nested PCR\": \"the DNA was pooled\"\n\nQ: Talking about the interpretation of the results, I am not sure how strong the conclusion can be based on only 10 tested sequences and, even more so, only one genomic context. Even with this limited number of tests, some results cannot be easily explained (for example, the acquired H3K27ac for sample #5), so in future it would be necessary to expand the panel of both sequences and contexts.\n\nA: We agree with the reviewer that some of the results (seq #5) are puzzling, although reproducible, since we performed all the tests in two biological replicates. At this stage we decided to report the results of the pilot project and focus on performing a more complex experiment based on ChIP-seq, as mentioned above, which will hopefully increase the number of tested sequences by an order of magnitude.\n\nQ: Besides the above-mentioned questions (hopefully the authors can address at least some of them in the revised version), I think the paper is clearly of scientific interest and adds an important piece of knowledge to its field.\n\nA: Again, we are grateful to the reviewer for the high evaluation of our work, careful reading of the manuscript, and useful comments."
}
]
},
{
"id": "34371",
"date": "04 Jun 2018",
"name": "Oleg Gusev",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe study focuses on the question of the relation between primary genomic sequence and H3K27 acetylation preferences. An original method, based on the translocation of sequences with different GC% and CpG content to the same genomic locus, followed by H3K27ac analysis, was employed. The results suggest that the context is more important than the GC or CpG properties of the genomic sequence itself. The results are of certain interest to the wide pool of researchers working with epigenetic aspects. At the same time, I believe that involvement of more techniques, such as ChIP-seq, would be suggested to confirm the results. Also, it is not clear from the text how many replicates were taken and how general the conclusion is if, for example, different genomic loci (or a different genome, for example an extremely GC- or AT-rich one) are used.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3823",
"date": "09 Aug 2018",
"name": "Anatoliy Zubritskiy",
"role": "Author Response",
"response": "Q: The study is focused on the question of the relation between primary genomic sequence and H3K27 acetylation preferences. An original method, based on the translocation of sequences with different GC% and CpG content to the same genomic locus, followed by H3K27ac analysis, was employed. The results suggest that the context is more important than the GC or CpG properties of the genomic sequence itself. The results are of certain interest to the wide pool of researchers working with epigenetic aspects.\n\nA: We thank the reviewer for a careful reading of the manuscript and useful comments. We have to disagree, however, that our results show the context to be more important than the GC or CpG content of the sequence. In fact, our results suggest that GC content may contribute to the establishing of H3K27ac. To avoid further confusion, we modified the abstract and the results accordingly to make this statement clearer:\n\"Our results suggest that, in contrast to H3K27me3, H3K27ac gain is unlikely affected by the CpG content of the underlying DNA sequence, while extremely high GC content might contribute to the gain of H3K27ac.\"\n\nQ: At the same time, I believe that involvement of more techniques, such as ChIP-seq, would be suggested to confirm the results.\n\nA: We agree with the reviewer that ChIP-seq may provide a more unbiased approach. Yet a regular ChIP-seq experiment after the insertion would produce a lot of waste, since only one region is expected to be changed. On the other hand, a modified version of the ChIP-seq experiment with multiple insertions into the same location may indeed be more unbiased and will hopefully increase the number of tested sequences by an order of magnitude, making the conclusions more solid. 
We are working on this protocol now, but we believe that this much more complicated experimental setup is beyond the scope of the current pilot paper.\n\nQ: Also, it is not clear from the text how many replicates were taken and how general the conclusion is if, for example, different genomic loci (or a different genome, for example an extremely GC- or AT-rich one) are used.\n\nA: The insertion of the 10 target sequences was performed in two independent biological replicates; then every replicate was ChIP'd and the resulting DNA was pooled before the nested PCR step. To avoid any further confusion, the corresponding text was added to the Methods.\nSection \"Construction of a recombination target\": \"This step was performed in two independent replicates.\"\nSection \"Chromatin Immunoprecipitation (ChIP)\": \"Biological replicates of targeted insertion of DNA sequences were processed independently.\"\nSection \"Nested PCR\": \"the DNA was pooled\"\nThe beta-globin locus was chosen based on its lack of chromatin modifications in most adult cell types, so that the surrounding chromatin does not interfere with the inserted sequence and the gain of modifications in the inserted fragment is determined by its sequence. We agree that additional loci lacking chromatin marks might also be tested, yet for now we focus on the modified ChIP-seq protocol to test many more inserted sequences. At this stage we are not generalizing our conclusions to other genomes with extreme AT- or GC-content, since apart from these differences such organisms may also have quite different H3K27ac-attracting machinery."
}
]
}
] | 1
|
https://f1000research.com/articles/7-165
|
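The author response in this record argues that the T4-polymerase-filled T2A (+) strand carries XhoI and NheI sites and that frame 1 between them encodes GSGEGRGSLLTCGDVEENPGP. As a sketch (not part of the original exchange), that claim can be checked in a few lines of Python; the (+) strand and peptide are copied from the response, while the codon-table construction and variable names are our own.

```python
from itertools import product

# The filled-in T2A (+) strand, as given in the author response.
PLUS = ("AAAGCTCGAGGGCAGTGGAGAGGGCAGAGGAAGTCTGCTAACATGCGGTG"
        "ACGTCGAGGAGAATCCTGGCCCAGCTAGCGAAT")

# Standard genetic code: one-letter amino acids for the 64 codons
# enumerated in TCAG x TCAG x TCAG order.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {"".join(c): aa for c, aa in zip(product("TCAG", repeat=3), AA)}

xhoi = PLUS.find("CTCGAG")        # XhoI recognition site
nhei = PLUS.find("GCTAGC")        # NheI recognition site
orf = PLUS[xhoi + 6 : nhei]       # frame-1 region between the two sites

peptide = "".join(CODON[orf[i:i + 3]] for i in range(0, len(orf), 3))
print(peptide)  # GSGEGRGSLLTCGDVEENPGP
```

Running this confirms both sites are present and that the intervening 63 nt translate, in frame 1, to the T2A peptide quoted in the response.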